This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Can someone make a bot to automatically update the Wikipedia:List of Wikipedians by number of DYKs. Just like Wikipedia:List of Wikipedians by featured list nominations and Wikipedia:List of Wikipedians by featured article nominations. Thanks. ~~ CAPTAIN MEDUSA talk 18:37, 15 June 2020 (UTC)
Coding... - I'm just making the script to get the data. Once that's working I'll look at making the bot to update the table Pi (Talk to me!) 17:53, 22 June 2020 (UTC)
Hi. Is there a bot which can monitor a category, such as the category that {{ helpme}} requests are added to, and leave notifications of each new addition on-wiki at a specified target page, such as my personal talk page? Just checking as I am looking for something similar for AfC WikiProject, and I suspect that it might be already implemented. Prior discussion one, prior discussion two. Can look at implementing it in Python or Nodejs or Perl, but I hope that perhaps there is an existing bot for such a task. Thank you in advance for your advice. -- Gryllida ( talk) 05:13, 29 June 2020 (UTC)
Hi all, hope you are well in this crazy time period. I am seeking a bot that will:
This is per the discussion here: Wikipedia_talk:WikiProject_Medicine#Proposal_to_remove_ICD_codes_from_templates, essentially the reasons being that they clutter the titles and don't help editors.
An example of this would be here:
The codes are: "C44.L40–L68/D23.L15–49, 173/216"; each is linked to a respective ICD-9 and ICD-10 category. Wikidata would need to be updated and then these removed from the title. We did this a few years ago within the Anatomy space; ping to Nihlus, who was very helpful then. Please let me know if there's any additional information that I can provide to help. Many thanks, -- Tom (LT) ( talk) 23:43, 15 July 2020 (UTC)
"C44.L40–L68/D23.L15–49, 173/216" must be added to skin cancer (Q192102), right? (and/or to WD-items listed in this navbox, like papillary eccrine adenoma (Q7132983)?)
Please consider this request to be suspended / closed until I get the Wikidata component sorted. Many thanks -- Tom (LT) ( talk) 03:14, 17 July 2020 (UTC)
Template:Date is supposed to be used only in templates, but there are more than a few uses elsewhere. A simple "subst" won't work, as a significant portion of the uses are inside <ref> tags, where subst does not work.
A bot to process these would be appreciated. -- Izno ( talk) 00:15, 4 July 2020 (UTC)
Since this bot is down, there is a request to replace one of its functions at Wikipedia:Bots/Requests for approval/ProcBot 3. However, there is a second task it does: updating Wikipedia:Sockpuppet investigations/Cases/Overview. We may need someone to create a bot to fill that function. Ping Amalthea, ProcrastinatingReader and Xaosflux --- C& C ( Coffeeandcrumbs) 14:17, 22 July 2020 (UTC)
Hey geniuses, I was looking at this version of Bigg Boss Tamil 3 and noted that
| first_aired = 23 June 2019
| last_aired = 6 October 2019
was problematic, because these dates should be properly formatted for Template:Infobox television. So I wondered if there was a bot that could look at these parameters, then look to see if there is one of the {{ Use DMY dates}} or {{ Use MDY dates}} templates on the page, and adjust accordingly, with a result of:
| first_aired = {{Start date|df=y|2019|06|23}}
| last_aired = {{End date|df=y|2019|10|06}}
or
| first_aired = {{Start date|2019|06|23}}
| last_aired = {{End date|2019|10|06}}
Depending on whatever date format it finds.
Also, could this be incorporated into an existing bot? Don't we have maintenance bots that could be looking for stuff like this?
Thanks! Cyphoidbomb ( talk) 18:44, 6 July 2020 (UTC)
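The conversion described above is mechanical enough to sketch. Below is a minimal Python sketch (not any existing bot's code; the function name and the two date formats handled are assumptions for illustration) that rewrites bare |first_aired=/|last_aired= values into {{Start date}}/{{End date}}, choosing |df=y when a {{Use dmy dates}} banner is present:

```python
import re
from datetime import datetime

def format_air_dates(wikitext):
    """Rewrite bare |first_aired=/|last_aired= values as {{Start date}} /
    {{End date}}, adding |df=y when the page carries {{Use dmy dates}}.
    Hypothetical helper, not the actual bot's code."""
    df = "|df=y" if re.search(r"\{\{\s*Use dmy dates", wikitext, re.I) else ""

    def repl(m):
        param, raw = m.group(1), m.group(2).strip()
        for fmt in ("%d %B %Y", "%B %d, %Y"):  # "23 June 2019" / "June 23, 2019"
            try:
                d = datetime.strptime(raw, fmt)
                break
            except ValueError:
                continue
        else:
            return m.group(0)  # leave anything unparseable for a human
        tmpl = "Start date" if param == "first_aired" else "End date"
        return "| %s = {{%s%s|%d|%02d|%02d}}" % (param, tmpl, df, d.year, d.month, d.day)

    return re.sub(r"\|[ \t]*(first_aired|last_aired)[ \t]*=[ \t]*([^\n|{}]+)", repl, wikitext)
```

A real task would need to skip values that already contain a template (the regex above ignores anything with braces) and handle many more date formats.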
{{#ifexpr: {{Str find|{{lc:{{{first_aired|}}}}}|start date}} > 1 | [[Category:Pages using infobox television with nonstandard dates]]}}
with a similar tracker for end date as well.
Primefac ( talk) 00:40, 20 July 2020 (UTC)
{{Str find|{{lc:{{{first_aired|}}}}}|may}}}} seems to return "1" for example, which makes me think |first_aired= is already passed through the start date template by the time it's evaluated here. Will read through some docs. ProcrastinatingReader ( talk) 17:05, 20 July 2020 (UTC)
Yeah, you're right, Gonnym, I didn't realize that it would parse the {{ start date}} template before it hit the infobox call. In that case, you'll be wanting {{#if:{{{first_aired|}}}|{{#ifexpr: {{Str find|{{{first_aired|}}}|dtstart }} < 1 | [[Category:Pages using infobox television with nonstandard dates]]}}}} and using dtend for the end date. Really nice, actually, because it means that you don't have to worry about template redirects. I've tested it in the sandbox and it looks good to me, but if someone else wants to run it through the paces before we go live, let me know. Primefac ( talk) 21:42, 20 July 2020 (UTC)
|first_aired param. Primefac ( talk) 21:57, 20 July 2020 (UTC)
This is now Done by User:ProcBot. Archiving. ProcrastinatingReader ( talk) 16:42, 9 September 2020 (UTC)
The links of sources for the land uses of the municipalities (within the geography section) in Switzerland point to a web page that is no longer available (e.g. Bulle), and only some of them have been linked to the Wayback Machine. Can someone link the rest of them to the Wayback Machine? -- Horizon Sunset ( talk) 17:44, 22 July 2020 (UTC)
Can I get technical support for creating DetectiveBot? Nihaal The Wikipedian ( talk) 13:07, 2 September 2020 (UTC)
Primefac I want this bot to be at least 1.5x faster than ClueBot NG. Detect, revert, report, and also block when needed. This bot is very likely to have false positives too, so help might be needed. Nihaal The Wikipedian ( talk) 05:42, 3 September 2020 (UTC)
@ Redrose64 and Primefac: Then there is a messaging bot which helps people properly ping and send messages to people. Free for everyone, a simple tool. Nihaal The Wikipedian ( talk) 05:41, 4 September 2020 (UTC)
I need help for that. My idea. Nihaal 03:48, 9 September 2020 (UTC)
Look at the two most recent redirects that I have created. There were communities at the name with the state disambiguator, but the base name was a redlink. Is there any way to do what I just did for every article that is in the form of "[anything], [state/province/country]" with a corresponding redlink? It would mostly need to run only once, but it could run again for a minor update every 3 months or so. HotdogPi 11:20, 20 July 2020 (UTC)
Query 46704 lists all of the talk namespace redirects that point to an article. Usually, if "A" redirects to "B" and "Talk:A" is also a redirect, then "Talk:A" should redirect to "Talk:B", not "B".
So, I think that we should have a bot that lists all of the talk page to mainspace redirects on a single page (perhaps a user subpage for the bot, or a " database reports subpage"). After that, the bot will find all of the redirects that do not include a slash (slashes indicate subpages), and fix them to point to the talk page of the mainspace target instead. If "Talk:A" happens to redirect to "A", then the (admin)bot would delete "Talk:A" because otherwise, it would redirect to itself. There are currently 1292 talk namespace redirects that point to articles (plus possibly some more due to a database replication lag). GeoffreyT2000 ( talk) 20:46, 30 July 2020 (UTC)
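The fix step described above reduces to a simple rewrite rule. As a sketch (hypothetical helper, not a real bot; a production version would also skip titles containing "/" and handle more redirect syntax):

```python
import re

def fix_talk_redirect(talk_title, wikitext):
    """Retarget a talk page that redirects to an article "B" so it points to
    "Talk:B" instead. Returns the fixed wikitext, or None when the page should
    be deleted (it would otherwise become a self-redirect)."""
    m = re.match(r"#REDIRECT\s*\[\[([^\]|#]+)", wikitext, re.I)
    if not m:
        return wikitext
    target = m.group(1).strip()
    if ":" in target:  # crude namespace check; leave non-mainspace targets alone
        return wikitext
    new_target = "Talk:" + target
    if new_target == talk_title:
        return None  # "Talk:A" -> "A": deletion beats a self-redirect
    return "#REDIRECT [[%s]]" % new_target
```

Note the colon check is deliberately crude: mainspace titles can legitimately contain colons, so a real bot would need proper namespace resolution.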
I see a lot of users like BD2412 going around just blanking IP talk pages with the {{ OW}} template, and I thought "Boy, isn't that a tedious job", and then I thought "well, let's get a bot to do that". Here's my suggestion, if this hasn't already been introduced or already given to a bot as a task:
A Bot that would go around and search for Old IP warnings/blocks (ones more than a month old [excluding block template, which would need more time]) and get rid of them by replacing them with {{ OW}}.
Best, P,TO 19104 ( talk) ( contribs) 01:39, 7 August 2020 (UTC).
It seems this issue has been discussed before in the following places:
It also seems that in the past this was a very controversial issue, so it was necessary to go to the VP. P,TO 19104 ( talk) ( contribs) 13:39, 7 August 2020 (UTC)
I am surprised to find that our hundreds of thousands of images falling under the Category:Fair use images structure have no categorization by date. This is important because all of these images will eventually fall into the public domain, based on the passage of time. Most images that have been uploaded have a "date" field, and although a large subset of these are filled out as "unknown", that also should be categorized. In short, I would like a bot to parse the images falling under this category and create and populate all needed subcategories for, e.g., Category:Fair use images created in 1952. BD2412 T 15:34, 7 August 2020 (UTC)
Website | Links
---|---
British Thoracic Society | 15
MeSH | 14
... | ...
This might need two steps (per WAID, below) - generating a list of articles, and then generating a list of external links that have been used.
Make maintenance easier, by:
Discussions as to appropriate venue, and one not relevant to request
Thanks :) Close enough. I withdraw this request for the moment so that bot editors can focus on more worthy targets -- Tom (LT) ( talk) 23:52, 18 September 2020 (UTC)
Sorry for the messy entry, I'm not familiar with bot procedures but I wanted to flag this as it seems to have gone under the radar.
It would appear that a few months ago the content at Template:Germanic philology was merged into Template:Germanic languages, and the former was redirected to the latter. However, many pages included both templates, and so now they instead contain two copies of the same template (as the philology template simply reproduces the content of the languages template). For instance: Fingallian, Germanic philology. I am unsure how many pages this may affect.
To fix this, it seems it would be worthwhile to instruct a bot to:
Thanks. BlackholeWA ( talk) 02:44, 27 August 2020 (UTC)
Consider open-source software developed for the Wikipedia/Wikimedia movement: usually its tasks are tracked in systems designed by and for software developers, such as GitHub Issues or Phabricator, but its major audience, i.e. the people who care the most about its progress, and whose feedback most needs to be solicited, are on Wikipedia.
I hereby propose the idea of creating an OpenSourceSyncBot to sync a Wikipedia page, e.g. mw:ORES/Synced_tasks, with a search criterion in the relevant tracking system, e.g. the ORES component on Phabricator.
Phase 1: for any task in the tracking system, sync it onto the subpage on Wikipedia, giving people more transparency and visibility into the development progress. The format could be a wiki table with "task title, progress, reportee, assignee". The bot would also periodically sync to update this information.
Phase 2: for a newly added row in the table, added by a Wikipedia user, the bot will create a new task on the external tracking system.
Proposer: xinbenlv Talk, Remember to "ping" me 23:06, 14 August 2020 (UTC)
For example, WP:Twinkle developers could better reach out to their users; see Wikipedia:Twinkle#Reporting bugs or requesting features.
(This is not a request for a bot. This is a request for a sanity check.)
WP:ELN is discussing the ==External links== section of Mary Tyler Moore. It contains (in part) this list:
* {{NYTtopic|people/m/mary_tyler_moore/}}
* {{IMDb name|1546}}
* {{tcmdb name|id=134771|name=Mary Tyler Moore}}
* {{iBDB name|023123}}
* {{findagrave|175697586}}
This is not an unusual set of links for BLP articles. Obviously, the exact list of links and the order they're presented in varies. Most of them use external link templates.
Imagine a future in which we developed a consensus that some/all of this "standard link dump" should be combined into a single template, perhaps similar to Template:Authority control. Am I correct that it would (if that magical future arrives) be a relatively simple matter for a bot to remove some of these (existing) items from this list and transform them into the new template, in at least most articles? If it's harder than it sounds, then I'd rather know that in advance. (Please ping me.) WhatamIdoing ( talk) 17:48, 19 July 2020 (UTC)
|imdb=1546 to have it kick out the IMDb link. I suppose that could be doable, but I don't think you'll ever get consensus to basically turn five templates into "five templates plus a wrapper template for them all". Primefac ( talk) 18:26, 19 July 2020 (UTC)
{{new thing |NYTtopic=people/m/mary_tyler_moore/ |IMDb name=1546 |tcmdb name=134771 |iBDB name=023123 |findagrave=175697586}}
and have the template display the same links more compactly. WhatamIdoing ( talk) 22:18, 19 July 2020 (UTC)
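The transformation being sanity-checked here is indeed mostly mechanical. A rough Python sketch follows; the template list and the "{{new thing}}" name are taken from the example above and are placeholders, and it deliberately skips multi-parameter uses like {{tcmdb name|id=...|name=...}}:

```python
import re

# Placeholder names mirroring the example above; the real template set and
# the wrapper's name would come from the (hypothetical) future consensus.
MERGE = ["NYTtopic", "IMDb name", "iBDB name", "findagrave"]

def merge_link_templates(wikitext):
    """Collect single-parameter uses of the listed templates and replace the
    first one with a combined {{new thing}}, dropping the rest. Sketch only;
    leftover empty bullets would still need cleanup."""
    found = []

    def grab(m):
        found.append((m.group(1), m.group(2)))
        return "\x00"  # placeholder marking where a template was removed

    pattern = r"\{\{(%s)\|([^{}|]*)\}\}" % "|".join(re.escape(t) for t in MERGE)
    text = re.sub(pattern, grab, wikitext)
    if not found:
        return wikitext
    combined = "{{new thing " + " ".join("|%s=%s" % kv for kv in found) + "}}"
    # first placeholder becomes the combined template, the rest vanish
    return text.replace("\x00", combined, 1).replace("\x00", "")
```

So the answer to the sanity check is broadly "yes, simple", with the usual long tail: named parameters, template redirects, and stray list markup.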
This isn't another bot request (straight away), but rather (at the moment) only a request to see if anyone has the skills to create the code for it. I placed a request at Wikipedia:Bot requests/Archive 79#Civil parish bot, and there was discussion at Wikipedia:Village pump (proposals)/Archive 160#Civil parish bot and User talk:DannyS712/Archive 12#Json format that coding was needed. The basic format is at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Simple and I have attempted the coding at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Coded. I don't have the skills to do the JSON bit, so I'm wondering if anyone does? If not, then this can be archived and I can get on with creating them manually, thanks. Crouch, Swale ( talk) 20:58, 23 September 2020 (UTC)
The current way of dealing with double redirects is slow and inefficient. A far simpler way to deal with them would be to simply have a bot that detects when a new redirect is created, either from a merger, or as a new page. If it finds a double redirect, it will fix it instantly. The current system is slow, and redirects can take several days to fix. If sinebot is able to sign posts in talk and user talk namespaces almost instantly, how hard can it be for a bot to fix double redirects faster? I-82-I | TALK 07:48, 29 August 2020 (UTC)
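The detection step the request describes amounts to one dictionary lookup per redirect. A sketch of that step (assumed data shape: a mapping of redirect title to target, which a real bot would build from the database or recent changes feed):

```python
def resolve_double_redirects(targets):
    """targets: dict mapping each redirect title to its target. Returns the
    fixes a bot would make: redirects whose target is itself a redirect get
    retargeted one hop further. Detection only; chains longer than two hops
    would resolve over repeated passes."""
    fixes = {}
    for page, target in targets.items():
        if target in targets:            # target is itself a redirect
            final = targets[target]
            if final != page:            # avoid creating a self-redirect loop
                fixes[page] = final
    return fixes
```

The hard part of making this "instant" isn't the logic but the triggering, i.e. listening to the page-creation and move streams rather than batch-scanning.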
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I'm not sure if such a bot already exists, but shouldn't there be an automated script that tags categories under C1 if they remain empty for an allotted time (e.g. six hours)? ToThAc ( talk) 22:04, 7 October 2020 (UTC)
Usually editors moving pages don't move the editnotice attached to it. Either they forget, or they aren't admin/TE so they can't do it (or both). {{ Editnotice/notice}} categorises such cases into Category:Editnotices whose targets are redirects (which I've been updating for a while), but it often takes months for the job queue to go over transclusions and add moved pages into this cat (see this on VPT), which makes it hard to even do this manually. I'm thinking a bot would (a) be able to do this sooner and get around that issue and (b) actually just do the move automatically, suppressing the redirect. One way would be to listen to Special:Log/move and check if an editnotice for the page exists; this could be done continuously. Another is to regularly loop over all transclusions of {{ Editnotice}} (or Special:PrefixIndex/Template:Editnotices/) daily and do the moves. There's <20k so this is feasible, I think, but this would leave a period of up to 24 hours (ideally, the editnotice shouldn't just be disappearing for a day, especially when they're ones required for DS etc). Thoughts on these options, or other alternative methods? ProcrastinatingReader ( talk) 16:41, 9 September 2020 (UTC)
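For either approach, the bot needs to map a page title to its editnotice title. On enwiki, per-page editnotices live under Template:Editnotices/Page/<full title>, so the mapping is a pure string operation (sketch only; the real bot would still need to check existence and perform the move via the API):

```python
def editnotice_titles(old_title, new_title):
    """After a page move, the editnotice kept at
    Template:Editnotices/Page/<full page title> must move too. Returns the
    (source, destination) pair a bot would check and move between."""
    prefix = "Template:Editnotices/Page/"
    return prefix + old_title, prefix + new_title
```

The listen-to-Special:Log/move variant would call this once per logged move and skip pairs where the source editnotice doesn't exist.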
This is a formal request to recruit @ Yobot: to tag talk pages under WikiProject Phoenicia. Please tag the pages under Category:Phoenicia; no auto-rating. Thanks ~ Elias Z. ( talkallam) 13:34, 1 September 2020 (UTC)
For many years, U.S. college articles were using manually updated tables like this to represent admissions statistics. Following a WikiProject Higher Education discussion, we've begun replacing them with {{ Infobox U.S. college admissions}}, which uses data available from the Common Data Set (and I think also IPEDS) for everything (except the optional historical test score parameters). Symbols for historical data are chosen automatically using the new {{ Fluctuation formatter}} I created.
Would anyone be interested in starting work on a bot that could gather the data and use it to update the templates automatically every year? Given the number of colleges in the U.S., doing so will save likely hundreds of hours of editor work per year. {{u| Sdkb}} talk 20:06, 29 August 2020 (UTC)
Participation at AfD often requires considerable research and debate to find consensus. It's therefore understandable that people get frustrated when, sometime after it's closed, the article is renominated without them knowing. Given how few participants many AfDs have, it sometimes happens that a well attended AfD is overturned by a much smaller group. But even when that doesn't happen, the second (or subsequent) nomination loses out on the efforts of those who researched before.
Anyone willing to make a bot that would look for "nomination)" in the title (or some other method of determining renominations) and, based on an opt-in list, notify past participants (if they want)? — Rhododendrites talk \\ 04:11, 30 August 2020 (UTC)
Just to clarify, in case it's unclear, when I say "opt-in" I intended that to mean opt-in for the service, and not on the level of the individual AfD. i.e. "I want this in general" rather than "if this specific article is renominated, I want to be notified". — Rhododendrites talk \\ 19:25, 31 August 2020 (UTC)
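The "look for 'nomination)' in the title" heuristic the request mentions can be sketched directly. A hypothetical helper (not an existing bot) that strips the ordinal suffix so prior AfDs, and hence their participants, can be located:

```python
import re

def base_afd_title(afd_title):
    """Strip a "(2nd nomination)"-style suffix from an AfD page title, so the
    prior discussions (whose opted-in participants should be notified) can be
    found under the base name. Returns (base title, is_renomination)."""
    m = re.match(
        r"(Wikipedia:Articles for deletion/.+?)\s*\(\d+(?:st|nd|rd|th) nomination\)$",
        afd_title,
    )
    if m:
        return m.group(1), True
    return afd_title, False
```

The notification side would then intersect the earlier AfDs' signatories with the opt-in list before delivering talk page messages.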
Hello! I originally posted this on WP:AWBREQ, but a bot makes more sense. Currently, articles about curlers use various combinations of {{ Sports links}}, {{ WCT}}, {{ WCF}}, {{ CurlingZone}}, and other templates for external links, but they can all be simplified to just {{ Sports links}}, which would standardize our templates moving forward. Could a bot check all pages that use {{ WCF}}, {{ WCT}}, and {{ CurlingZone}}; remove those templates in the external links section (but not other article sections), along with {{ SR/Olympics profile}}, {{ IOC profile}}, {{ COC profile}}, {{ USOPC profile}}, {{ Olympedia}}, and {{ Olympic Channel}} (all of which are redundant with {{ Sports links}}); and then add {{ Sports links}} if it's not already there? Thanks! Allthegoldmedals ( talk) 11:59, 20 August 2020 (UTC)
More or less the same thing as Wikipedia:Bots/Requests for approval/DYKHousekeepingBot, which Shubinator says they don't have time to revive. The idea is to crawl Category:Pages using DYK talk with a missing entry, find the missing DYK blurbs, and add |entry= to these articles' {{ DYK talk}} templates on their talk pages.
For instance, 1st Polish Light Cavalry Regiment of the Imperial Guard has the DYK blurb (found in Wikipedia:Recent additions/2009/April)
In this case, Talk:1st Polish Light Cavalry Regiment of the Imperial Guard should be updated with {{DYK talk|...|entry=... that [[light cavalry|light-cavalrymen]] of the '''[[Polish 1st Light Cavalry Regiment of the Imperial Guard]]''' saved [[Napoleon I of France|Napoleon]]'s life at least three times?}}
Headbomb { t · c · p · b} 04:40, 9 October 2020 (UTC)
This might be something that'd more have to be done with AWB (in which case I'd appreciate advice on how), but to lay it out: I fairly often come across instances of e.g. Star Trek: The Next Generation that are not italicized. I can think of very few instances where this wikitext would show up, including the link, but we would not want to italicize. Would it be possible to get a bot to go around and identify instances of missing italicizations and fix them? (Italicization obviously isn't the most pressing issue facing the 'pedia, but since it is visible to readers, I don't think WP:COSMETICBOT applies.) {{u| Sdkb}} talk 21:10, 7 September 2020 (UTC)
"I can think of very few instances where this wikitext would show up": are there any? If so, it might not be a good task for a bot, as it wouldn't be able to differentiate here. ProcrastinatingReader ( talk) 16:47, 9 September 2020 (UTC)
This would be a bot to remove 404 (the ones that appear red) links. — Preceding unsigned comment added by Moouser ( talk • contribs) 22:33, 16 November 2020 (UTC)
A recent change to the MediaWiki software has started assigning a tracking category, Category:Pages with non-numeric formatnum arguments, to pages that contain invalid input to formatnum, which is supposed to be given only numeric input. I have edited a few templates to get the article count down from about 150,000 to the current 31,000, but there are some instances of errors within articles that need to be corrected.
One of the errors is invalid input to currency templates, including {{ US$}}, {{ CAD}}, and other templates in Category:Currency templates. The invalid input often looks like {{US$|75 million}}, which should be written {{US$|75}}{{nbsp}}million.
Here's a sample fix.
This search shows some of the 500+ articles that have invalid text in {{ US$}}. The "insource" regex in the search shows the most common construction of the invalid text, and creating a regex to fix the affected articles should be easy. The tricky part is doing the same fix for about 50 templates and their redirects.
Is there anyone here who would be willing to work with me to fix these errors? I can create a list of probable articles and templates that are involved (although I don't know how to create a list of all of the possible redirects). I estimate that the affected article count is between 1,000 and 3,000. – Jonesey95 ( talk) 15:29, 29 September 2020 (UTC)
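For the most common construction, the per-template fix is a single substitution. A sketch (the template list here is a small assumed subset; the real job covers roughly 50 templates plus their redirects, and more magnitude words than these three):

```python
import re

# Assumed subset; the full task would enumerate ~50 currency templates
# and their redirects.
CURRENCY = ("US\\$", "CAD", "GBP")

def fix_currency_words(wikitext):
    """Move trailing magnitude words out of the numeric parameter:
    {{US$|75 million}} -> {{US$|75}}{{nbsp}}million. Sketch only."""
    pattern = r"\{\{(%s)\|([\d.,]+) +(million|billion|trillion)\}\}" % "|".join(CURRENCY)
    return re.sub(pattern, r"{{\1|\2}}{{nbsp}}\3", wikitext)
```

Anything the narrow pattern doesn't match (extra parameters, ranges, words other than magnitudes) is left for human review, which matches the "create a list of probable articles" workflow proposed above.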
formatnum can produce "unreliable output". It looks like the MW developers have deprecated and started tracking this non-numeric input (see T237467 and T263592) as of sometime in the last week, so we either need to fix existing uses or write a new template. It would be great to have a new template that does what formatnum does; if you start developing such a template (it should have a better name than the poorly chosen "formatnum"), ping me and I'll help with QA. – Jonesey95 ( talk) 18:19, 29 September 2020 (UTC)
A lot of rcat templates specify the printworthiness of redirects through the |printworthy= parameter of {{ Redirect template}}. All of these have, in their documentation, a notice asking editors to also add {{ R printworthy}} or {{ R unprintworthy}} (as appropriate) to redirects categorised by the template, if in the mainspace. However, very few editors actually take notice of this instruction, so how about a bot to do this instead?
The bot would be implemented (I hypothesise; I've never actually done this myself) by running through Category:Printworthy redirects and Category:Unprintworthy redirects, checking if each page includes {{ R printworthy}} or {{ R unprintworthy}}, and adding the relevant template if the answer is no (within an {{ Rcatshell}} if there is one).
Any thoughts? WT79 ( speak to me | editing patterns | what I been doing) 17:13, 9 September 2020 (UTC)
The |printworthy= parameter adds the pages to Category:Printworthy redirects, if they are in the main namespace. However, {{ R printworthy}} / {{ R unprintworthy}} are the standard rcat templates to use to mark redirects as printworthy / unprintworthy; these are used separately to other templates on the redirect. Printworthiness may also be specified by other rcats, so it isn't just part of {{ Redirect template}}, which is only supposed to be used as a meta-template. If their functionality was merged into {{ Redirect template}}, and {{ R printworthy}} and {{ R unprintworthy}} replaced with '{{Redirect template|printworthy=<!--yes or no as appropriate-->}}', a reverse problem would be caused, as {{ R printworthy}}/{{ R unprintworthy}} would need to be removed from pages where printworthiness is already specified, to avoid duplication. WT79 ( speak to me | editing patterns | what I been doing) 14:23, 14 September 2020 (UTC) (edited 16:16, 12 October 2020 (UTC))
Scrolling through Wikipedia:Typo Team/moss/E, I noticed that a majority of typos marked are incorrect spacing after periods.As an example, I would like to name the typo I just made between "periods" and "as". Now, to qualify for correction, the words would have to:
Interested to hear what you think. Opalzukor ( talk) 16:11, 16 September 2020 (UTC)
I very often come across talk pages that are archiving either way too aggressively or (less frequently) not at all aggressively enough. Since the frequency of new talk page threads is something quantifiable, I'd think it'd be possible to use an algorithm to determine when this is happening and automatically adjust the archiving period. I envision that this would be only for mainspace talk pages, since non-mainspace pages have differing desires for how long old threads ought to stick around. Integrating with the current manual adjustment system would be tricky, but this could eventually save a bunch of editor effort and make talk pages function better. {{u| Sdkb}} talk 04:29, 13 October 2020 (UTC)
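The quantifiable part of this idea is easy to prototype: map thread frequency to an archive age. A sketch in Python, where every threshold is invented purely for illustration (a real bot would need community-agreed numbers and an opt-out):

```python
def suggest_archive_age(threads_per_month, min_days=30, max_days=365):
    """Map talk page activity to an |algo=old(<N>d) value: busier pages
    archive sooner, quiet pages keep threads longer. All constants here are
    placeholders, not from any existing bot."""
    if threads_per_month <= 0:
        days = max_days
    else:
        days = max(min_days, min(max_days, int(300 / threads_per_month)))
    return "old(%dd)" % days
```

The genuinely hard parts, as noted, are measuring thread frequency robustly and integrating with pages whose archive settings were set manually.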
We could add |age=auto at User:ClueBot III/ArchiveThis and make it suggested/default, which handles the issue for new pages going forward. Then, once that's been established for a while, we could start considering mass switches for existing pages, but even then I'd assume we'd want to allow opting out. {{u| Sdkb}} talk 23:44, 13 October 2020 (UTC)
Related to this: a bot to keep automatic archival information templates such as {{ archives}} or {{ auto archiving notice}} in synch with actual bot parameters. That is, if we change |algo=old(60d) to |algo=old(90d) (this example uses User:Lowercase sigmabot III syntax), a bot could come in and change |age=60 to |age=90 in such a template, if present. CapnZapp ( talk) 17:05, 21 October 2020 (UTC)
|algo= value. Primefac ( talk) 19:34, 8 November 2020 (UTC)
On certain pages, it would be useful to have a bot automatically do null edits after a certain period. I'm thinking of placing something like
{{Bot purge}} <!-- Purges every day (00:00:01 UTC)-->
{{Bot purge|12 hours}} <!-- Purges every 12 hours (00:00:01 UTC; 12:00:01 UTC)-->
{{Bot purge|1 hour|mode=null}} <!-- Null edits every 1 hour (00:00:01 UTC; 01:00:01 UTC; 02:00:01 UTC...)-->
{{Bot purge|15 minutes}} <!-- Purges every 15 minutes (00:00:01 UTC; 00:15:01 UTC; 00:30:01 UTC...)-->
{{Bot purge|UTC=20:00:00}} <!-- Purges at 20:00:00 UTC every day-->
on a page, and then the bot taking its instructions from there. Headbomb { t · c · p · b} 18:54, 22 October 2020 (UTC)
|mode=purge vs |mode=null for the cases where it matters. I know that for the usages I have in mind, purges are insufficient. Ultimately it doesn't really matter, as long as we have a scalable, user-friendly way to get bots to purge/null edit certain pages. Headbomb { t · c · p · b} 20:28, 22 October 2020 (UTC)
{{Bot purge}} is actually used. So if you have it on e.g. User:AAlertBot/Status2, then only that page would get bot-purged, and not the pages that transclude User:AAlertBot/Status2. But I'm spitballing ideas here; it could be handy to have transclusions get purged too. Perhaps |scope=transclusions / |scope=this page? Limiting to metaspace (i.e. not articles, not mainspace talk) would also likely be a good initial limitation. Headbomb { t · c · p · b} 22:54, 22 October 2020 (UTC)
only that page would get bot-purged, and not the pages that transclude...: That is not how the job queue works, AFAIK. Pages that are null-edited get put in the job queue to have their transclusions null-edited as well (eventually). I think a purge runs only on the purged page, though, with no downstream effects. – Jonesey95 ( talk) 23:00, 22 October 2020 (UTC)
{{subst:void}} I believe, and that certainly works. Headbomb { t · c · p · b} 21:11, 23 October 2020 (UTC)
forcelinkupdate requests through the API. -- Redrose64 🌹 ( talk) 10:15, 24 October 2020 (UTC)
For the record, I speedily approved this, and leave implementation details to @ ProcrastinatingReader: and the community in general. If anyone has a problem with that, I can rescind approval. Headbomb { t · c · p · b} 18:32, 29 October 2020 (UTC)
Hello. I'm not too familiar with Wikipedia bots, but I'm wondering if one exists that eliminates double spaces in pages ("[][]" instead of "[]"). I do a lot of control-F work to eliminate these spaces, but I think this is the kind of task that would be best completed by a bot. Thank you, KidAd talk 23:31, 2 November 2020 (UTC)
You can search for _+ and replace with _ (where _ is a space), but that does require human review a lot of the time. Headbomb { t · c · p · b} 00:50, 3 November 2020 (UTC)
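One reason the naive search-and-replace needs human review is that some contexts are space-sensitive. A sketch in Python of a slightly safer version (the `<pre>` guard is a crude illustration, not an exhaustive list of the contexts a real bot would have to skip):

```python
import re

def collapse_spaces(wikitext):
    """Collapse runs of spaces between non-space characters, but leave
    <pre>...</pre> blocks untouched, since they render spaces literally.
    Sketch only: tables, templates, and styled elements also need care."""
    parts = re.split(r"(<pre>.*?</pre>)", wikitext, flags=re.S)
    return "".join(
        p if p.startswith("<pre>") else re.sub(r"(?<=\S)  +(?=\S)", " ", p)
        for p in parts
    )
```

Even with such guards, edits that change no rendered output would still run afoul of WP:COSMETICBOT, which is the main obstacle here.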
<pre>...</pre> element, where an element is styled with the declaration white-space:pre;. So opening up an edit to reduce those spaces in the wikisource is simply a waste of time. -- Redrose64 🌹 ( talk) 21:04, 4 November 2020 (UTC)
The football position article Guard (American and Canadian football) was moved on 27 August 2019 to Guard (gridiron football) per [105], with an edit summary of "moved page Guard (American and Canadian football) to Guard (gridiron football): to match Tackle (gridiron football position) and Center (gridiron football)".
AWB currently matches 1575 links to Guard (American and Canadian football).
Can we search and replace to bypass redirects in two capitalization formats, like:
1. [[Guard (American and Canadian football)|Guard]] --> [[Guard (gridiron football)|Guard]]
2. [[Guard (American and Canadian football)|guard]] --> [[Guard (gridiron football)|guard]]
Any "missed" links/redirects should be few and I can manually (or AWB) correct them.
UW Dawgs ( talk) 20:58, 6 December 2020 (UTC)
According to quarry:query/49607, there are currently 2,585 articles tagged as Redirect-Class by at least one WikiProject that are not actually redirects, and 2,179 if you further exclude disambiguation pages. These incorrectly tagged articles are likely to receive less attention from the WikiProject as a result, and I can't imagine any good reason why a project would want to leave non-redirects tagged as Redirect-Class. Thus, two questions:
Vahurzpu ( talk) 17:57, 9 November 2020 (UTC)
Other than with {{ WikiProject Military history}}, there should never be any need to explicitly set |class=redirect (or equivalent), because all WikiProjects that provide Redirect-Class (other than Military history) also have code in their banners that will autodetect that a talk page is that of a redirect. So either altering it to the valueless form |class=, or removing the parameter entirely, will both work. In my opinion, the first method is best for pages in the main Talk: space, since an explicit value (stub, start etc.) will need to be added later on; but the second method is more suited to all other talk spaces, because the namespace is autodetected, so the page will be automatically placed in Template-Class or similar, as applicable. Note that the Military history banner recognises |class=rdr, |class=red and |class=redirect (all case-insensitive); it does not recognise the |class=redir that the others all allow. So talk pages having {{ WikiProject Military history}} and one of those three values for |class= will need to be individually checked to see if the talk page is that of a redirect; if it is, the value in |class= will need to be left alone. -- Redrose64 🌹 ( talk) 11:08, 10 November 2020 (UTC)
{{ resolved}} This is a request for a mass undo of about 200 messages delivered at 18:43 and 18:44, 8 December 2020 (UTC time) by the MediaWiki message delivery service. See its recent contributions and Special:Log/massmessage. I queued a message for delivery about 15 hours before that time stamp, and all messages were delivered, and then through some apparent hiccup, a subset of editors received the message again, 15 hours later.
I don't know of an easy way to undo those 200 edits (194 to be precise, I believe). Is there someone with some sort of script/bot/privilege who is able to quickly and easily undo them? Thanks in advance. – Jonesey95 ( talk) 19:29, 8 December 2020 (UTC)
Related discussions:
A change back in June to the introduction shown to new users has resulted in new pages being created as subpages of Draft:Sample page when users complete the introduction without logging in (see Special:PrefixIndex/Draft:Sample_page). These are essentially individualized sandboxes, and should be routinely deleted - they're test pages by definition so WP:G2 applies, and they often contain material that qualifies for deletion under other speedy criteria. Can someone code an adminbot that will look for these subpages and delete them, maybe if they have not been edited in a few days? Ivanvector ( Talk/ Edits) 14:45, 30 September 2020 (UTC)
A bot would be useful on Wikipedia:Translators available for the sorting the lists of users, in each section, by the date of last edit (descending). I'd suggest running the task monthly. – SD0001 ( talk) 12:19, 28 September 2020 (UTC)
Is there a bot that adds the {{DEFAULTSORT}} magic word to articles that need it but don't have it? I have a list of over 1k television "List of episodes" articles that don't have DEFAULTSORT. Cheers. -- /Alex/21 09:52, 2 October 2020 (UTC)
[[Category:Star Trek: Enterprise episodes| ]]
[[Category:Star Trek episode lists|Enterprise episodes]]
[[Category:Lists of American science fiction television series episodes|Star Trek: Enterprise]]
There are three main cases to consider:
Note that the lists and categories in question might not use identical phrasing for some strange "local consensus" reason, so determining which case applies probably wouldn't be a good bot task. ― cobaltcigs 08:28, 17 October 2020 (UTC)
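For pages where the correct sort key is unambiguous, the edit itself is mechanical. A sketch (plain Python, not an existing bot) of inserting {{DEFAULTSORT}} immediately before the first category when the page lacks one:

```python
import re

def add_defaultsort(wikitext, sort_key):
    """Insert {{DEFAULTSORT:sort_key}} before the first category link,
    unless the page already has a DEFAULTSORT."""
    if re.search(r"\{\{\s*DEFAULTSORT\s*[:|]", wikitext, re.IGNORECASE):
        return wikitext  # already sorted; leave the page alone
    m = re.search(r"\[\[\s*Category\s*:", wikitext, re.IGNORECASE)
    if m is None:
        return wikitext  # no categories; nothing sensible to anchor on
    return (wikitext[:m.start()]
            + "{{DEFAULTSORT:%s}}\n" % sort_key
            + wikitext[m.start():])
```

Choosing the sort key itself (the three cases above) would still need human review, which is why only the insertion step is sketched here.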
I was going to copy all films from the American television films category into the American films category (using Cat-a-lot), because the template on the American films category specifically tells editors to do this. However, an administrator objected because he did not want his watchlist to be full of hundreds of minor edits. He then requested that I get a bot to transfer articles from non-diffusing subcategories into the appropriate parent categories. I have taken into account the fact that such a bot may transfer categories that were inappropriately placed, though. Is this still an acceptable proposal? Scorpions13256 ( talk) 20:31, 14 September 2020 (UTC)
@ Scorpions13256: Just to clarify, is this a request for all items in the sub-cats of American films, or just American television films? Mdann52 ( talk) 21:57, 19 October 2020 (UTC)
Hi all, there are about 650 articles which were previously peer reviewed. However, because of article moves, the links to the reviews are now broken; they are tracked in Category:Pages using Template:Old peer review with broken archive link. See for example Talk:Battle of the Catalaunian Plains. I'm seeking bot help repairing the 650 links. Essentially, the bot will need to go through each article in that category, determine what the name of the article was when the peer review was closed, and then update the template {{ Old peer review}} on the current article talk page by adding |reviewedname=the old name. Extra useful if the date can be found and inserted too (|date=date the review was closed). -- Tom (LT) ( talk) 00:25, 19 September 2020 (UTC)
Hi. I would like to own a bot to give me assistance with reverting vandalism and warning users who have vandalised Wikipedia pages. I would like the bot to be called OverriddenBot, since my username is Overridden and that’s what I chose for the bot. Thank you. Overridden ( talk) 08:26, 19 December 2020 (UTC)
We have numerous articles with titles like Thomas Williams (Alabama), using the state alone as a disambiguator. These are, as it turns out, disfavored because the person who is the subject of the article is not an example of an Alabama. However, it's a pain in the ass to dig them out and fix them manually. What I would ideally like is a bot to find all biographical articles with titles that are Person's name (State) and replace them with Person's name (State profession) (in the above case, it would be Thomas Williams (Alabama politician)), and then update all incoming links to that as well. I recognize that this can be tricky, because many people have multiple professions and it may require a human eye to choose the best disambiguator, but I think there are some broad categories that can be done automatically. For example (again, as with Thomas Williams (Alabama)), anyone who has served in the United States Congress can almost certainly be disambiguated with "politician" for their profession. Since many of the issues will be with members of Congress auto-generated at these titles in the first place, that should handle a good number of them. BD2412 T 03:03, 13 October 2020 (UTC)
_) instead of spaces. – Majavah ( talk · edits) 17:31, 13 October 2020 (UTC)
Interstate 635 (Texas), Colorado River (Texas), Toyota Stadium (Texas), etc. are obviously not "examples of a Texas" either. Why are "state-only disambiguators" only "disfavored" for politicians' names? ― cobaltcigs 08:01, 17 October 2020 (UTC)
[[... (OCCUPATION born YEAR)]] (plus or minus a comma, and unless their birth years are unknown), because non-politicians tend not to be strongly associated with a particular state. I doubt that [[... (Bavaria politician)]] and [[... (Baden-Württemberg politician)]] are used anywhere, because we assume readers know U.S. states but not German states. Falling back on the birth year is probably most common for them as well.

Hi, I'm looking to get a bot to update the Template:IMDb episodes links in TV season articles, by adding |season=x to the link template, with x being the season number, so the link will directly point to that respective season on IMDb; when |season=x is not specified in the template, it just links to the most current season. I'm guessing the bot can just grab the season number from the article title? This would only need to be done for season articles; for example, Fargo (season 4), while the IMDb links for Fargo (TV series) and List of Fargo episodes can remain unchanged. Thanks. Drovethrughosts ( talk) 16:26, 27 October 2020 (UTC)
An example: {{Imdb episodes|2802850|Fargo}} would be changed to {{Imdb episodes|2802850|Fargo|season=4}}
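Grabbing the season number from the title, as suggested, could look like this sketch (the regexes are illustrative and untested against unusual title formats):

```python
import re

def add_season_param(title, wikitext):
    """Add |season=N to {{IMDb episodes}} when the article title is a
    '(season N)' page and the template lacks the parameter already."""
    m = re.search(r"\(season (\d+)\)", title, re.IGNORECASE)
    if m is None:
        return wikitext  # not a season article; leave unchanged
    season = m.group(1)

    def repl(tmpl):
        inner = tmpl.group(0)
        if "season=" in inner:
            return inner  # parameter already present
        return inner[:-2] + "|season=%s}}" % season  # splice before closing }}

    return re.sub(r"\{\{\s*IMDb episodes\s*\|[^{}]*\}\}", repl, wikitext,
                  flags=re.IGNORECASE)
```

For "Fargo (season 4)" this turns {{Imdb episodes|2802850|Fargo}} into {{Imdb episodes|2802850|Fargo|season=4}}, matching the example above.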
Converting articles that use {{ IMDb title}}, like Twin Peaks (season 3), to use {{ IMDb episodes}} could also be considered.

A common mistake is to type "Wikiproject" instead of "WikiProject" to get to pages like Template:WikiProject Physics or Wikipedia:WikiProject Physics. So a bot that would automatically create those redirects would be really useful.
This should only be the base pages, not the subpages like Wikipedia:WikiProject Physics/Quality Control. Headbomb { t · c · p · b} 16:51, 31 July 2020 (UTC)
Category:Redirects tagged as disambiguation pages contains lots of talk pages of redirects which are incorrectly tagged with {{ WikiProject Disambiguation}} (or one of its 45 redirects). Please could a bot:
remove |class=disambig from any other project banner on the page.
Thank you — Martin ( MSGJ · talk) 17:55, 29 October 2020 (UTC)
"we should keep banners on talk pages where the corresponding mainspace page is an actual dab" – this is precisely where the banner is not needed. If the talk page has other content, the banner doesn't hurt. But if the banner is all that is there, then, as the template's documentation makes clear, the page shouldn't have been created in the first place. – Uanfala (talk) 20:22, 29 October 2020 (UTC)
Apparently this is a lot more complicated/controversial than I envisioned it would be, so I withdraw the request for now. If there is a reliable way of determining whether the target of these redirects is a dab page then I may return. — Martin ( MSGJ · talk) 08:07, 30 October 2020 (UTC)
This task should be relatively simple. Find all cases of redirects like
and mark them with {{ R to diacritics}}
Then find all cases of redirects like
and mark them with {{ R from diacritics}}
Obviously pages that are already tagged should be skipped. Headbomb { t · c · p · b} 03:45, 17 September 2020 (UTC)
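One way a bot could decide whether a redirect qualifies is to compare the two titles after stripping diacritics; a sketch using Python's unicodedata (my own approach, not an existing bot's code):

```python
import unicodedata

def strip_diacritics(s):
    """Decompose to NFD and drop combining marks, e.g. 'Amélie' -> 'Amelie'."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

def differs_only_in_diacritics(redirect, target):
    """True when the titles are identical once diacritics are removed,
    but not identical as written."""
    return (redirect != target
            and strip_diacritics(redirect) == strip_diacritics(target))
```

If the redirect is the plain form and the target the accented one, {{R to diacritics}} applies; in the reverse direction, {{R from diacritics}}.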
A common PMR request is to delete redirects following draftification. I believe this is covered under WP:R2. See Quarry - we only have 6 of these pages currently, so they do usually get suppressed or tagged. I imagine it's been discussed before, but I couldn't find it in BOTREQ archives: why, instead of a lot of manual deletions and PMR reqs, does a simple bot (well, adminbot) not just delete these auto after a bit of time elapses? With a basic check to ensure the redirect has no real history. ProcrastinatingReader ( talk) 09:01, 2 November 2020 (UTC)
Re the "after a bit of time elapses" part: when vandals draftify articles, those drafts are reverted in minutes/hours, usually by patrollers. So if there's a 12-hour delay I don't think it could be abused? ProcrastinatingReader ( talk) 11:06, 2 November 2020 (UTC)
Re "ensure the redirect has no history" (I would have the threshold very low, at maybe 2 revisions or less): I don't see how you could then use the bot to vandalise; if you move an article to draftspace (and the redirect gets deleted), the article can just get moved back; if you just replace an article with a redirect to draftspace, it does not get deleted because it has some history. Seems like a lose-lose (for the vandal) situation. WT79 ( speak to me | editing patterns | what I been doing) 16:15, 2 November 2020 (UTC)
Re "I don't see any vandalism problem." Example: a user moves a low-watched but perfectly satisfactory page to the draft space, which is then R2'd by the bot. The page was written five years ago and all of the original editors are inactive. Thus, no one notices, and it's deleted after six months per WP:G13. I can think of a half-dozen others, but per WP:BEANS I'll just stick with the most obvious. Primefac ( talk) 17:18, 2 November 2020 (UTC)
Some years ago, I edited a number of articles about the Civil rights movement. I cited Civil Rights Movement Veterans ( https://www.crmvet.org) as an information source. Other Wiki editors also cited that website as a source in their articles and edits. Last year it changed its name to "Civil Rights Movement Archive," but all their URLs remained the same.
By hand, I edited the Civil rights movement article to change the name of the source from "Veterans" to "Archive." My computer skills are primitive and it was far too time consuming. I did a Wikipedia search for "Civil Rights Movement Veterans" (note quotes) which returned 108 wiki articles.
Could someone run a bot to automatically change "Civil Rights Movement Veterans" references to "Civil Rights Movement Archive" (leaving all URLs unchanged)?
Thanks
Brucehartford ( talk) 18:31, 15 November 2020 (UTC)
No doubt you're correct. Unfortunately, my computer skills are not up to AWB. I looked at it, and it was beyond me. Thanks for checking into it though. Brucehartford ( talk) 00:15, 18 November 2020 (UTC)
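Since the phrase to change contains spaces and the crmvet.org URLs do not, a plain string replacement leaves every link untouched automatically; a sketch of the edit a bot (or AWB) would make:

```python
def rename_source(wikitext):
    """Rename the cited source; URLs contain no spaces, so they can
    never match the old phrase and are left unchanged."""
    return wikitext.replace("Civil Rights Movement Veterans",
                            "Civil Rights Movement Archive")
```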
This is a fairly self-explanatory task; I checked with AnomieBOT, which currently does most template substing, a few days ago, but it was deemed too difficult to determine which transclusions were in section headers. See the template's documentation for details on why substitution is necessary. WT79 ( speak to me | editing patterns | what I been doing) 16:52, 6 November 2020 (UTC)
/* {{anchor|Foo}}Bar */ [rest of summary], even though Foo is normally invisible to readers, and the entire string, with curly brackets, does not work as a section link. This is not generated by the raw HTML, which is ignored by the summary generator. We need to clean up these existing uses. WT79 ( speak to me | editing patterns | what I been doing) 21:42, 6 November 2020 (UTC)
There are about a thousand articles that contain the parameter Ship sail plan = Full rigged ship but should use Ship sail plan = Full-rigged ship, according to well-known dictionaries and the common understanding of compound modifiers. About a hundred or so are unlinked, and it wouldn't hurt to link them while we're at it. There may or may not be spaces on either side of the equals sign. Chris the speller yack 17:37, 13 November 2020 (UTC)
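A regex that tolerates optional spaces around the equals sign covers all the variants, and can link the value at the same time; a sketch (it assumes the value has no piped display text):

```python
import re

def fix_sail_plan(wikitext):
    """Normalise 'Full rigged ship' to the hyphenated, linked form in
    |Ship sail plan=, whatever the spacing around '='."""
    return re.sub(
        r"(Ship sail plan\s*=\s*)(?:\[\[)?Full[ -]rigged ship(?:\]\])?",
        r"\1[[Full-rigged ship]]",
        wikitext)
```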
When someone changes a section name, there's no indication that someone else somewhere on Wikipedia might have created a link to that section that will be broken by the name change. I occasionally come across instances of such broken anchor links. Is there any bot patrolling for this and changing links (or, if that would be disruptive in some cases, adding an {{ anchor}} to the destination page)? If not, I'd think we'd want to set that up. {{u| Sdkb}} talk 05:53, 20 September 2020 (UTC)
I recently remarked on the discord that people fairly frequently misspell my username, sometimes resulting in missed pings, and several others chimed in that they have the same or a similar issue. It would be nice to have a bot that could work off a whitelist of common misspellings of usernames, and fix them/ping the editor. We'd probably want a little oversight of the list to prevent abuse, but otherwise it'd hopefully be pretty straightforward. We might have it append some smalltext, similar to {{ Unsigned}}. {{u| Sdkb}} talk 21:55, 9 November 2020 (UTC)
Hi! An editor probably tried to mention you (link to diff) on page (link), but misspelled your account name. (Sent in error? Report here.) Ovinus ( talk) 07:17, 10 November 2020 (UTC)
Something like "Correcting misspelled username of C0mpl1c8tD NamE on behalf of BadSpellr (report error)" would be sufficient. That would get around any trickiness with pings of multiple users, etc., since it wouldn't append any smalltext.
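Matching a misspelling against the whitelist, with a fuzzy fallback, might look like this sketch (the whitelist contents are made up for illustration):

```python
import difflib

# Hypothetical whitelist: real username -> known misspellings
WHITELIST = {"Sdkb": ["Skdb", "Sdbk"], "Ovinus": ["Ovnius"]}

def resolve_misspelling(name):
    """Return the intended username for a known misspelling, else None."""
    for real, typos in WHITELIST.items():
        if name in typos:
            return real
    # fall back to fuzzy matching against the real names themselves
    close = difflib.get_close_matches(name, WHITELIST.keys(), n=1, cutoff=0.8)
    return close[0] if close else None
```

The cutoff would need tuning so that distinct real usernames are never "corrected" into one another, which is one reason oversight of the list matters.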
Following discussion in the first half of 2020, Template:Infobox dog breed underwent a minor redesign to reduce the focus on kennel clubs from English speaking countries [108]. As a result a number of parameters were deprecated but remain in many of the 627 transclusions. It is requested that a bot be tasked to remove the following deprecated parameters from these pages:
| patronage = | fcigroup = | fcisection = | fcinum = | akcgroup = | akcstd = | akcstd1 = | akcstd2 = | akcfss = | akcmisc = | ankcgroup = | ankcstd = | ankcstd1 = | ankcstd2 = | ckcgroup = | ckcstd = | ckcstd1 = | ckcstd2 = | ckcmisc = | kcukgroup = | kcukstd = | kcukstd1 = | kcukstd2 = | nzkcgroup = | nzkcstd = | nzkcstd1 = | nzkcstd2 = | ukcgroup = | ukcstd = | ukcstd1 = | ukcstd2 = | otherstd =
This is not a war stopper, but it may cause some confusion for unknowing editors in the future. Kind regards, Cavalryman ( talk) 22:46, 7 January 2021 (UTC).
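Stripping a fixed set of parameters is a line-oriented regex job; a sketch (the DEPRECATED list here is shortened for illustration and would need the full set from the request above):

```python
import re

DEPRECATED = ["patronage", "fcigroup", "fcisection", "fcinum",
              "akcgroup", "akcstd", "kcukgroup", "kcukstd"]  # ...and the rest

def remove_deprecated(wikitext):
    """Drop '| param = value' lines for each deprecated parameter."""
    pattern = r"^\s*\|\s*(?:%s)\s*=[^\n]*\n" % "|".join(DEPRECATED)
    return re.sub(pattern, "", wikitext, flags=re.MULTILINE)
```

A real run would want to parse the template properly (e.g. with a wikitext parser) rather than trust line boundaries, since parameter values can span lines.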
Hi all, I was wondering if anyone was interested in developing a script for talk pages to automatically roll templates like DYK, GA and PR into an {{ ArticleHistory}} format? I occasionally Wikignome, and it occurs to me such a script would likely be very useful for myself and many other editors, by automating a fairly time-consuming manual process. The benefits will be more readable and organised talk pages, as well as a more comprehensive history for some articles. What a noble goal! -- Tom (LT) ( talk) 06:44, 27 July 2020 (UTC)
I posted at village pump earlier and didn't get any responses, so I assume such a script doesn't exist, therefore I thought I'd ask here :). -- Tom (LT) ( talk) 06:44, 27 July 2020 (UTC)
@ Hawkeye7 could FACBot or MilHist bot be customised to run on new good articles or peer reviews every so often? -- Tom (LT) ( talk) 04:42, 19 September 2020 (UTC)
Sorry, I was just made aware of this discussion: please see Taming talk clutter from 2008 and read this discussion at Village Pump Technical.
Gimmetrow and Dr Pda designed the article milestones. Gimmetrow's old bot (Gimmebot) rolled EVERY content review template into the article milestones, so it can be done -- that is, GA, PR, FA, everything. But more, he ordered the events logically and sensibly, and I have been going through and trying to fix at least the October FAs, since a) all templates are no longer rolled in by bot, b) some GA passes use faulty templates, c) many DYK noms do not identify the nom page, d) some processes are not providing oldids, and e) OTD is off doing their own thing, dumping clutter on to talk pages outside of the article milestones.
No, oldid is not REQUIRED for proper display, but neither is it hard to find. Dr pda also used to have a script that returned an oldid based on any timestamp. ALL OF THIS was accomplished more than a decade ago, so I'm sure it can be now. And the point of the milestones is to always be able to click back on any date and see what the article looked like at the time of that event.
GimmeBot processed every GA and every FA and every PR. If any one is going to take this on, please try to return sensible ordering of the milestones as they used to be and as I have been correcting them, eg, here. Separate each event, in order, and put the rest of the important stuff at the bottom. And get OTD and ITN on board, and figure out why DYK isn't providing nom pages. Happy to help if someone is going to take this on; as of now, I am repairing all FACs and FARs manually. See my contribs. SandyGeorgia ( Talk) 21:19, 1 November 2020 (UTC)
Enterprisey starting over with a brain dump of everything I know that is going wrong with Template:Article history, and things that might be done to fix the issues. Historically, when GimmeBot was doing everything, there was very little manual intervention. Since the demise of GimmeBot, we have different processes going different ways, nothing standardized, and some editors intervening manually and causing errors. This will be partially an exercise in getting everyone back on the same page.
In no particular order of priority:
I will add to this list as I recall other things ... SandyGeorgia ( Talk) 13:38, 2 November 2020 (UTC)
From user talk:SD0001:
... would there be a way to do a bot report of pending articles that have titles identical to titles on other Wikipedias, with links to those foreign-language Wikipedia pages? If an article exists on another Wikipedia, it's a good indication that the draft should be approved. Thanks, Calliopejen1 (talk) 18:19 pm, 11 November 2020 (UTC)
I don't think using wikidata is an option since AFC drafts are very unlikely to have been linked to wikidata. Is there another way this could be done? – SD0001 ( talk) 06:33, 13 November 2020 (UTC)
Using a SQL query this would be a cross-database join. My bot "shadows" does something similar: it looks for File: pages that have the same name on Commons and enwiki. It's pretty fast, even though Commons has 60 million File: pages, which exceeds the total of all mainspace pages in all wikis by a fair amount. The problem is that Wikitech is redesigning the SQL servers and cross-database joins will soon no longer be available. A Phab task is open to try and find a solution; if you would like to follow developments, see Phab T267992. -- Green C 19:03, 17 November 2020 (UTC)
I come across way too many article talks, like Talk:Jennifer Lawrence, where the {{ Archives}} causes that ugly overlap. It happens whenever the template isn't at the bottom of the list of talk banners (view source to see what I mean). To fix, we'd need a continuous bot to make sure this template keeps getting moved to the bottom of talk page banners. I don't think a CSS fix is really possible for this, and a JS fix would not be preferable to just having a bot maintain talk pages. I've made a discussion on the talk last week, see Redrose's response there for useful info as well (perhaps a broader bot for that purpose should be considered). It reminds me that another issue we see is DS templates constantly in the wrong order, it's advised by the template itself, and WP:TALKORDER, to have them below the talk header. Yet they seem to be scattered randomly. We commonly have random whitespace in talk page banners, too, thus random newlines. Really a bot to clean all this up would be a good idea, and enforce order (except when opted-out, I suppose). ProcrastinatingReader ( talk) 22:04, 2 September 2020 (UTC)
Have a bot remove a user from the category Category:Wikipedia usernames with possible policy issues when they have been inactive for over one year or have been blocked indefinitely. Heart (talk) 03:15, 9 October 2020 (UTC)
Hey I need a simple bot that could be able to add words to the links I send it. Maybe have the option where to add the text, but also have an option to remove all the text that you put in the bot once it comes across one of the words on the links. Might've not expressed myself the best but I hope you guys got my message. — Preceding unsigned comment added by JokerLow ( talk • contribs) 23:51, 5 January 2021 (UTC)
Hello, I'm here to request a bot to make an article alert page for the WP:WILDFIRE WikiProject, like WP:CALI and WP:USA have. --🔥 Lightning Complex Fire🔥 17:51, 8 January 2021 (UTC)
Per MOS:REFPUNCT, citations are supposed to go after punctuation like periods and commas, not before it. This is already included in GENFIXes, but I think it's noticeable enough to readers that it'd be good to have a bot working on it; it's not really WP:COSMETICBOT to my reading. Yobot has an approved task for doing this, but given how many pages I've come across with this issue, I'm guessing it's no longer working. {{u| Sdkb}} talk 20:29, 10 January 2021 (UTC)
I resumed the bot task. If there is any problem, please report it immediately. -- Magioladitis ( talk) 09:44, 14 January 2021 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Sansoni (publisher) is an old and important Italian publisher, whose page was recently created.
There are hundreds of pages with Cite book templates for works published by Sansoni.
It would be useful to link them to the publisher page.
So my proposal is that a bot should look for instances of {{ Cite book }} where there is one of these parameters:
|publisher=G. C. Sansoni |publisher=G.C. Sansoni |publisher=Sansoni
And replace it respectively with:
|publisher=[[Sansoni (publisher)|G.C. Sansoni]] |publisher=[[Sansoni (publisher)|G.C. Sansoni]] |publisher=[[Sansoni (publisher)|Sansoni]]
The replace should only be done on the first instance in each page, of course, to avoid excessive wikilinks.
Thank you in advance!
-- Lou Crazy ( talk) 02:21, 14 January 2021 (UTC)
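The "first instance only" rule maps naturally onto a replacement with count=1, trying the longer "G. C. Sansoni" variants before the bare "Sansoni" so they aren't partially matched; a sketch:

```python
import re

REPLACEMENTS = [
    # handles both "G. C. Sansoni" and "G.C. Sansoni"
    (r"\|publisher=G\.\s?C\. Sansoni",
     "|publisher=[[Sansoni (publisher)|G.C. Sansoni]]"),
    (r"\|publisher=Sansoni",
     "|publisher=[[Sansoni (publisher)|Sansoni]]"),
]

def link_publisher(wikitext):
    """Link the first matching Sansoni publisher parameter only."""
    for pattern, repl in REPLACEMENTS:
        new = re.sub(pattern, repl, wikitext, count=1)
        if new != wikitext:
            return new  # one link per page is enough
    return wikitext
```

The bare-"Sansoni" pattern would also catch longer values such as "Sansoni Editore", so a real bot should anchor the match to the end of the parameter value.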
Please remove all files in this category, because it's not necessary (all files in this category are out of copyright since this year). 185.172.241.184 ( talk) 09:24, 15 January 2021 (UTC)
I've been doing this by hand today and I thought maybe a bot could? To avoid complications we could start doing this with a category that has ~100 articles and check for mistakes.
More info here: /info/en/?search=Wikipedia:Writing_about_women
Samiwamy ( talk) 18:35, 18 January 2021 (UTC)
Thanks everyone for the info. Samiwamy ( talk) 18:59, 18 January 2021 (UTC)
The Torah obligates a man to not deprive his wife of food. Hume Cronyn appeared alongside Jessica Tandy, his wife of over fifty years. More controversially, some people actually are notable mainly for being the wife/husband/son/mother/whatever of someone more famous. Certes ( talk) 19:24, 18 January 2021 (UTC)
"Prince Bernhard of Lippe-Biesterfeld, husband of Queen Juliana of the Netherlands, unveiled the Statue of Maria van Riebeeck" is correct to imply that the prince is mainly notable for being married to the better-known queen. Would "married to" be an improvement there? Certes ( talk) 22:41, 18 January 2021 (UTC)
I have seen many users who have been blocked indefinitely for various reasons (socking, disruptive editing, CIR, and what not), but they receive many newsletters and other notifications. Currently, there is User:Yapperbot/Pruner to remove inactive users from lists (WikiProject membership, FRS, etc.), notifying the removed users appropriately.
I am not sure of the extent of this task. Would it be feasible to spend resources on creating a bot task to add {{ nobots}} and "Category:Wikipedians who opt out of message delivery" to the talk pages of users who have been blocked indefinitely and have had no {{ unblock}} on their talk page for more than 30 days? That way, resources can be conserved by avoiding new bot messages being posted, and later being archived. In case the user returns after a while, or after the standard offer, they can simply remove the "nobots" and the category. Opinions are welcome. Regards, —usernamekiran
(talk) 13:22, 15 September 2020 (UTC)
My specific need is to find all talk pages with the following six tags and remove all six of them. I would think that, if the "table" mechanism is generalized, then it could be used by others, so my preference would be a BOT named FindAllTheseTags_ThenRemoveAll (long name, but more descriptive than FindALLremoveALL).
My list of tags is:
To ensure clarity of the spec: Only Talk pages with ALL SIX are to be fixed.
The reason for this BOT is to counteract the still-existing after-effects of a BOT that, back in 2008, tagged talk pages with THE ABOVE SIX tags. My BOTREQ request is on the basis of my HelpDesk request, which directed me here. Pi314m ( talk) 12:33, 24 January 2021 (UTC)
This discussion has now progressed to Template talk:WikiProject Food and drink#2008 hangover: six tags, 15,000 cases. -- Redrose64 🌹 ( talk) 16:23, 25 January 2021 (UTC)
I frequently find sections with "unreferenced" tags that do have references, like this one. Is there a bot that can replace these tags with {{ refimprove}}? Jarble ( talk) 19:54, 22 January 2021 (UTC)
The bot would scan recent reverts and inspect the page history. It will then analyse the number of reverts against pre-set thresholds. If one of these thresholds are met, it files an automatic report to WP:RPP requesting page protection.
Example thresholds could be:
I have no programming experience with Wikipedia so unfortunately I won't be able to program this. Eyebeller ( talk) 07:59, 18 November 2020 (UTC)
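The threshold test itself is a sliding-window count over revert timestamps; a sketch with illustrative thresholds (the actual numbers would need community agreement):

```python
from datetime import datetime, timedelta

THRESHOLDS = [  # (number of reverts, within this window) -- illustrative only
    (5, timedelta(hours=1)),
    (10, timedelta(hours=24)),
]

def needs_protection(revert_times):
    """True if any threshold is met: sort the revert timestamps and slide
    a window of size n over them, checking the time span it covers."""
    times = sorted(revert_times)
    for n, window in THRESHOLDS:
        for i in range(len(times) - n + 1):
            if times[i + n - 1] - times[i] <= window:
                return True
    return False
```

The harder part is classifying edits as reverts in the first place (edit-summary heuristics, or the MediaWiki revert tags), and avoiding reports for pages already protected or already listed at RPP.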
Hello! I posted a comment over at the Village Pump and was directed here, so I'll copy here:
I think it'd be cool if a bot could be designed to add Talk page notifications when the subject's article is promoted to Quality status at another Wikipedia project. To pick an arbitrary example, a notification could have been added to en:Talk:G.U.Y. when hu:G.U.Y. was promoted to quality status.
Added benefits could be editors comparing different language versions, encouraging translation efforts, and more editors becoming familiar with Wikidata, depending on the notification's text and bot design. I could also see notifications being posted to WikiProject talk pages, etc.
Thoughts? Concerns? Other feedback? Sorry if this idea has been brought to the table before. --- Another Believer ( Talk) 15:38, 24 November 2020 (UTC)
I think this is technically difficult to do using a bot. The only reasonable approach I can think of: if we knew the name of the GA template on a given wiki (given that, although we use Legobot, other wikis probably do it manually with differently-named templates), we could patrol its recent changes, check for addition of the template, and then look up the Wikidata link to find the enwiki article and add a talk page message. Otherwise, this is probably better as a userscript with some kind of "Check other wikis for GA status" button in the toolbar. ProcrastinatingReader ( talk) 12:04, 5 December 2020 (UTC)
When uploading images to Wikimedia Commons, I often notice that there are no category redirects for the common names of most species, so there are too many redirects that need to be created manually. Is there a bot that could create these missing redirect pages, using data from Wikispecies or WikiData? For example: commons:Category:Red fox is {{category redirect|Vulpes vulpes}}. Jarble ( talk) 18:23, 10 December 2020 (UTC)
AT Wikipedia talk:Moving a page#Updating archive bot settings when moving a page you can learn PrimeHunter has recently created Category:Pages where archive parameter is not a subpage, and that by far the biggest reason for pages to end up there is that Wiki editors move pages without updating talk page archival bot instructions.
But why should humans have to do menial tasks like that at all?
I assume when the bots were created there were no real standards and practices regarding auto-archiving, but now there are. Seems to me we can avoid needless administration (and a lot of pages that don't archive properly) if we change the code of the two main archival bots to assume the standard naming as the default. If the |archive=User talk:Example/Archive %(counter)d
parameter (Lowercase Sigmabot III) and the |archiveprefix=User talk:Example/Archive
(ClueBot III) parameters could be made optional we could remove them from the standard instructions while still allowing manual override for the (few) cases where it's needed. This should mean that moving a page would no longer break auto archiving.
Of course, if there were a good reason this wasn't implemented back when, feel free to enlighten your audience :) CapnZapp ( talk) 09:59, 7 January 2021 (UTC)
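What the proposed fallback might look like, if the bots' maintainers agreed: derive the default archive name from the talk page's own title (the function and parameter names here are illustrative, not the bots' actual code):

```python
def effective_archive(page_title, archive_param=None):
    """Fall back to the standard '<page>/Archive %(counter)d' pattern
    when no |archive= is given, so page moves can't break archiving."""
    if archive_param:
        return archive_param  # explicit override wins
    return page_title + "/Archive %(counter)d"
```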
A bot could check whether the page named in |archive= (minus the subpage) is a redirect, and whether it has any subpages matching the subpage pattern that are non-redirects. So in that way it could be automated. For ones that don't meet the criteria, it's likely post-move cleanup is needed and it could build a report. ProcrastinatingReader ( talk) 13:17, 7 January 2021 (UTC)
One problem with making |archive= optional: a bot cannot check a parameter if it isn't there. It would have to look for moves in those cases. Moves aren't logged at the target name, so it would have to examine the page history or incoming redirects. If somebody copy-pastes the talk page instead of moving then there might be no trace. Not demanding a subpage name will also increase the number of poor archive parameters when somebody copies the archive parameters from a random page with very different activity. PrimeHunter ( talk) 22:39, 7 January 2021 (UTC)

Thank you all for your consideration so far, @ PrimeHunter, Primefac, Redrose64, and ProcrastinatingReader: Are you saying the occasional "overarchiving" (or whatever you feel is an appropriate title for the issue you have brought up) is deemed more disruptive than the (presumably) much larger load on human administration? That a big reason the bot writers mandated the archive name was so nothing was ever archived in the wrong place, even though it added a workload on humans that (from the layperson's perspective) is unnecessary? Perhaps a suggestion of this nature has been discussed previously? Cheers. PS. If this place is the wrong venue for taking a holistic approach and discussion here should be limited to only unproblematic suggestions, please direct me to a more appropriate venue, and thank you for your time. CapnZapp ( talk) 10:29, 8 January 2021 (UTC)
The website airdisaster.com appears to be used in several articles about aviation accidents, but now links to a spam site/domain hoarder, which seems very undesirable for users. Can someone get the direct links removed and where possible linked to an archived page? In particular where it is linked as an external link, occurrences in references appear to be fixed already Pieceofmetalwork ( talk) 16:07, 9 January 2021 (UTC)
It's done. Example edits: [112] [113] [114] [115], etc.. -- Green C 03:16, 4 February 2021 (UTC)
When you upload an image and choose the option on the list that it is a book cover, it adds book cover to the Licensing section, but you then have to manually add two things to the Summary. It should automatically make Use = Infobox since there is no possible chance there would be anything else. The other required field is Article, which you could easily see which article it was just placed in, and if none found then have a message reminding people to add one. Dream Focus 14:23, 14 February 2021 (UTC)
The following pairs of cleanup templates:
should not be used on the same article, but often are.
We need a bot, please, to remove the first template in each of the pairs named above.
The bot should not do this when the templates are section-specific (e.g. {{One source|section|date=October 2020}}).
The bot should remove {{ Multiple issues}}, where appropriate.
The bot needs to take into account common redirects (for example, {{ More citations needed}} is often used via {{ Refimprove}}; {{ More footnotes needed}} as {{ More footnotes}}, etc.).
This can be done as a one-off and then either run occasionally, or added to one of the regular clean-up tasks.
Other such pairs might be identified in future.
Prior discussion is here . Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:51, 21 October 2020 (UTC)
needs more, or better, references, and not necessarily in-line ones, then to say that there is a need for more in-line referencing based on the article's existing sources is superfluous, as the article's existing sources have been tagged as insufficient. WT79 ( speak to me | editing patterns | what I been doing) 16:23, 2 November 2020 (UTC)
Two related proposals on the Community Wishlist survey have been rejected as out of scope, so I am putting this note here in case there is anyone interested in taking on a project to keep Wikipedia pages and categories up to date.
Basically, pages on Wikipedia are not refreshed often enough, which means that it can take weeks, months, or longer for category membership to update, or for things like age calculation in infoboxes to work correctly.
When a change is made to a template or module that involves category membership, pages that transclude that template or module require a null edit in order to update their category membership. Because of delays in the job queue, such category membership changes can take weeks, or even months. Even worse, changes to the underlying MediaWiki software that apply categories (e.g. those in Special:TrackingCategories) do not force pages into the job queue, which means that category membership for affected pages can take months, years, or forever.
These delays cause outdated information, missing information, and outright errors to be rendered for readers, and cause editors who are working on fixing problems identified by maintenance categories to be delayed in applying those fixes. When a maintenance category should be populated but is empty, it gives editors the false impression that all affected articles are working properly.
One proposed solution/workaround is to set up a background process that tracks all pages based on their last edit time stamp, including null edits. That tracking could be used to make a list of needed null edits for "stale" pages. There is some detail in the phab links below about how to generate such lists and (possibly) how to force pages into the job queue so that a null-edit bot might not be needed.
For details and links to phabricator tickets, see meta:Community Wishlist Survey 2021/Archive/Set maximum delay in updating category membership and meta:Community Wishlist Survey 2021/Archive/Correct wrong tenure lengths. (Actually, I'll just put the phab links here: T132467, T135964, T157670, T159512.) – Jonesey95 ( talk) 16:34, 7 December 2020 (UTC)
select count(*), SUBSTR(page_links_updated, 1,6) from page group by SUBSTR(page_links_updated, 1,6) order by SUBSTR(page_links_updated, 1,6) desc; (and probably some variations on it, including that same query limited to article and template space). If we could get a reasonable list of the stalest articles and templates, a bot could null-edit them systematically. –
Jonesey95 ( talk) 16:35, 11 December 2020 (UTC)
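If such a list of stale pages can be produced, the re-parse itself may not need a true null edit: the MediaWiki Action API's purge with forcelinkupdate refreshes the links tables, including category membership. A minimal Python sketch of the idea (the endpoint URL and the 50-titles-per-batch limit are assumptions to verify against the API docs before running):

```python
def purge_params(titles):
    """Build MediaWiki Action API parameters for a links-update purge.
    The API accepts a pipe-joined batch of titles (assumed <= 50 here)."""
    return {
        "action": "purge",
        "titles": "|".join(titles),
        "forcelinkupdate": 1,
        "format": "json",
    }

def queue_linksupdate(titles, api="https://en.wikipedia.org/w/api.php"):
    """POST the purge; the import is local so the builder above stays testable."""
    import requests  # third-party dependency, assumed available
    return requests.post(api, data=purge_params(titles)).json()
```

A runner would feed batches of titles from the stale-page query into queue_linksupdate, rate-limited per bot policy.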
page_links_updated IS NULL and page_touched is old. They won't have been re-parsed since creation. Unfortunately, the page table does not seem to be indexed on those columns and I don't see a relevant alternative view. Certes ( talk) 17:16, 11 December 2020 (UTC)
Hello, the Illinois Historic Preservation Agency recently took down their website because it was based on Adobe Flash, breaking lots of links of the format http://gis.hpa.state.il.us/pdfs/XXXXXX.pdf (where X represents a numeral). I just checked a random one, and it was in IA, so the archive bots could run with these URLs, but how do I ask that they work on them? Nyttend ( talk) 13:12, 16 February 2021 (UTC)
We have many articles that have a disambiguated title that are not linked to from a hatnote and are not listed on a disambiguation page. Either editors forgot to add the page to the disambiguation page, or the hatnote was removed in an act of vandalism. Sourdough, Montana (created in 2009) was not accessible from the base title Sourdough until Sourdough (disambiguation) was created in 2020; Drought (disambiguation) was inaccessible from 2018 to 2020.
I'm wondering if this is something that would be worth keeping an eye on by periodically assembling a list. I have no idea if such a list would be too large for anyone to want to go through, maybe an invisible tag similar to {{ orphan}} could be added to these articles?
– Thjarkur (talk) 12:39, 19 January 2021 (UTC)
One of the things I like to do is make infoboxes compliant with MOS:SMALL and MOS:POINTS using AWB. For example, [116] and [117]. The SMALL fixes are easy: for HTML tags I just find <small> and </small> and leave the "replace with" window blank. For {{ small}} and {{ midsize}}, I use regex: find ({{small\|)(.*?)(}}) and replace with $2.
The MOS:POINTS are a bit more challenging. I basically hard-coded a bunch of find and replace rules using regex for common degrees. This way, it doesn't matter if it's typed as "M.B.A." or "M. B. A.", it'll still get changed to MBA.
The problem with AWB is it's not versatile enough for me, at least for my rudimentary skills. For example, in order to limit the find-and-replace to infoboxes, I set the rule as "inside templates", so I still have to make sure it doesn't make any changes to URLs in any of the CS1 templates. Another issue is related to my regex for PhD and PhB. For PhD: (P)(\.?)(\s?)(h)(\.?)(\s?)(d)(\.?). This means though that in the infobox for Marcel Lettre, "Joseph D. Kernan" becomes "JosePhD Kernan". I'd like for this task to be done by a bot so that I can make other edits and not have to waste time making sure that these issues don't come up. Bait30 Talk 2 me pls? 01:49, 22 January 2021 (UTC)
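The "JosePhD Kernan" problem above comes from the pattern being allowed to start mid-word. Anchoring it at a word boundary and constraining what may follow avoids that class of false positive; a hedged sketch in Python (the same regex should drop into an AWB rule):

```python
import re

# \b stops matches starting inside words like "Joseph"; the lookahead keeps
# the final optional dot from matching into a following word such as "Daniels".
PHD_RE = re.compile(r"\bP\.?\s?h\.?\s?D\.?(?=\s|$|[,;)])", re.IGNORECASE)

def normalize_phd(text):
    """Collapse 'Ph.D.', 'Ph. D.', etc. to 'PhD' without touching names."""
    return PHD_RE.sub("PhD", text)
```

The same boundary-plus-lookahead shape generalises to the other degree abbreviations (MBA, PhB, and so on).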
|name= or similar parameters. – Jonesey95 ( talk) 16:30, 22 January 2021 (UTC)
|education= anyways. Bait30 Talk 2 me pls? 21:28, 22 January 2021 (UTC)
|native_name= parameter; because |native_name= is rendered larger than the normal infobox text, the text inside the {{ small}} template ends up rendered at 93.5% of normal, which is perfectly fine and should not be enlarged. – Jonesey95 ( talk) 22:32, 22 January 2021 (UTC)
I request a bot that shall replace all Wikipedia links and the logo with their respective Uncyclopedia links. April Fools! Wikitrumpets ( talk) 04:10, 1 April 2021 (UTC)
I remember seeing this idea mentioned once, but for some reason it was never coded. Let's be honest here, this bot will be pretty accurate. April Fools! Pahunkat ( talk) 07:54, 1 April 2021 (UTC)
This section contains material that is kept because it is considered humorous. Such material is not meant to be taken seriously.
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Per WP:Most ideas are bad, most ideas are bad. But editors often forget this. Therefore, I propose a bot to remind them. This bot would use the latest advances in neural network language processing to automatically detect when someone is proposing an idea. It would then leave a message on their talk page, something along the lines of "Hi! I'm User:BadIdeasBot. I noticed that you recently suggested an idea. Please remember that most ideas are bad. On the off chance that your idea is not bad, please disregard this message. Thank you." What do you all think? Surely this idea is one of the good ones, right? - {{u| Sdkb}} talk 00:24, 1 April 2021 (UTC)
April Fools!
use the latest advances in neural network language processing. lol. ProcrastinatingReader ( talk) 12:47, 1 April 2021 (UTC)
Tom (LT) ( talk) 23:43, 15 July 2020 (UTC)
C44.L40–L68/D23.L15–49, 173/216 must be added to skin cancer (Q192102), right? And/or to WD-items listed in this navbox, like papillary eccrine adenoma (Q7132983)?
Please consider this request to be suspended / closed until I get the Wikidata component sorted. Many thanks -- Tom (LT) ( talk) 03:14, 17 July 2020 (UTC)
Template:Date is supposed to be used only in templates, but there are more than a few uses elsewhere. A simple "subst" won't fix all of them, as a significant portion of the uses are inside <ref> tags, where subst does not work.
A bot to process these would be appreciated. -- Izno ( talk) 00:15, 4 July 2020 (UTC)
Since this bot is down, there is a request to replace one of its functions at Wikipedia:Bots/Requests for approval/ProcBot 3. However, there is a second task it does: updating Wikipedia:Sockpuppet investigations/Cases/Overview. We may need someone to create a bot to fill that function. Ping Amalthea, ProcrastinatingReader and Xaosflux --- C& C ( Coffeeandcrumbs) 14:17, 22 July 2020 (UTC)
Hey geniuses, I was looking at this version of Bigg Boss Tamil 3 and noted that
| first_aired = 23 June 2019
| last_aired = 6 October 2019
was problematic, because these dates should be properly formatted for Template:Infobox television. So I wondered if there was a bot that could look at these parameters, then look to see if there is one of the {{ Use DMY dates}} or {{ Use MDY dates}} templates on the page, and adjust accordingly, with a result of:
| first_aired = {{Start date|df=y|2019|06|23}}
| last_aired = {{End date|df=y|2019|10|06}}
or
| first_aired = {{Start date|2019|06|23}}
| last_aired = {{End date|2019|10|06}}
Depending on whatever date format it finds.
Also, could this be incorporated into an existing bot? Don't we have maintenance bots that could be looking for stuff like this?
Thanks! Cyphoidbomb ( talk) 18:44, 6 July 2020 (UTC)
{{#ifexpr: {{Str find|{{lc:{{{first_aired|}}}}}|start date}} > 1 | [[Category:Pages using infobox television with nonstandard dates]]}}
with a similar tracker for end date as well. Primefac ( talk) 00:40, 20 July 2020 (UTC)
{{Str find|{{lc:{{{first_aired|}}}}}|may}}}} seems to return "1" for example, which makes me think |first_aired= is already passed through the start date template by the time it's evaluated here. Will read through some docs. ProcrastinatingReader ( talk) 17:05, 20 July 2020 (UTC)
Yeah, you're right, Gonnym, I didn't realize that it would parse the {{ start date}} template before it hit the infobox call. In that case, you'll be wanting {{#if:{{{first_aired|}}}|{{#ifexpr: {{Str find|{{{first_aired|}}}|dtstart }} < 1 | [[Category:Pages using infobox television with nonstandard dates]]}}}} and using dtend for the end date. Really nice, actually, because it means that you don't have to worry about template redirects. I've tested it in the sandbox and it looks good to me, but if someone else wants to run it through the paces before we go live let me know. Primefac ( talk) 21:42, 20 July 2020 (UTC)
|first_aired param. Primefac ( talk) 21:57, 20 July 2020 (UTC)
This is now Done by User:ProcBot. Archiving. ProcrastinatingReader ( talk) 16:42, 9 September 2020 (UTC)
The source links for land use in the municipality articles (within the geography section) in Switzerland point to a web page that is no longer available (e.g. Bulle), and only some of them have been linked to the Wayback Machine. Can someone link the rest of them to the Wayback Machine? -- Horizon Sunset ( talk) 17:44, 22 July 2020 (UTC)
Can I get technical support for creating DetectiveBot? Nihaal The Wikipedian ( talk) 13:07, 2 September 2020 (UTC)
Primefac, I want this bot to be at least 1.5x faster than ClueBot NG: detect, revert, report, and also block when needed. This bot is very likely to have false positives too, so help might be needed. Nihaal The Wikipedian ( talk) 05:42, 3 September 2020 (UTC)
@ Redrose64 and Primefac: Then there is a messaging bot which helps people properly ping and send messages to people. Free for everyone, a simple tool. Nihaal The Wikipedian ( talk) 05:41, 4 September 2020 (UTC)
I need help for that. My idea. Nihaal 03:48, 9 September 2020 (UTC)
Look at the two most recent redirects that I have created. There were communities at the name with the state disambiguator, but the base name was a redlink. Is there any way to do what I just did for every article that is in the form of "[anything], [state/province/country]" with a corresponding redlink? It would mostly need to run only once, but it could run again for a minor update every 3 months or so. HotdogPi 11:20, 20 July 2020 (UTC)
Query 46704 lists all of the talk namespace redirects that point to an article. Usually, if "A" redirects to "B" and "Talk:A" is also a redirect, then "Talk:A" should redirect to "Talk:B", not "B".
So, I think that we should have a bot that lists all of the talk page to mainspace redirects on a single page (perhaps a user subpage for the bot, or a " database reports subpage"). After that, the bot will find all of the redirects that do not include a slash (slashes indicate subpages), and fix them to point to the talk page of the mainspace target instead. If "Talk:A" happens to redirect to "A", then the (admin)bot would delete "Talk:A" because otherwise, it would redirect to itself. There are currently 1292 talk namespace redirects that point to articles (plus possibly some more due to a database replication lag). GeoffreyT2000 ( talk) 20:46, 30 July 2020 (UTC)
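The per-redirect decision described above is mechanical, so it can be sketched as a pure function (hypothetical helper; a real bot would read redirect targets via the API or the database report, and the title is assumed to carry a plain "Talk:" prefix):

```python
def fix_talk_redirect(talk_title, target_title):
    """Given a talk-namespace redirect and its current (article) target,
    return the action to take: skip subpages, delete self-pairs, retarget."""
    subject = talk_title[len("Talk:"):]
    if "/" in subject:
        return ("skip", None)                # slashes indicate subpages
    if target_title == subject:
        return ("delete", None)              # Talk:A -> A would point to itself
    return ("retarget", "Talk:" + target_title)
```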
I see a lot of users like BD2412 going around just blanking IP talk pages with the {{ OW}} template, and I thought "Boy, isn't that a tedious job", and then I thought "well, let's get a bot to do that". Here's my suggestion, if this hasn't already been introduced or already given to a bot as a task:
A bot that would go around and search for old IP warnings/blocks (ones more than a month old [excluding block templates, which would need more time]) and get rid of them by replacing them with {{ OW}}.
Best, P,TO 19104 ( talk) ( contribs) 01:39, 7 August 2020 (UTC).
It seems this issue has been discussed before in the following places:
It also seems that in the past this was a very controversial issue, so it was necessary to go to the VP. P,TO 19104 ( talk) ( contribs) 13:39, 7 August 2020 (UTC)
I am surprised to find that our hundreds of thousands of images falling under the Category:Fair use images structure have no categorization by date. This is important because all of these images will eventually fall into the public domain, based on the passage of time. Most images that have been uploaded have a "date" field, and although a large subset of these are filled out as "unknown", that also should be categorized. In short, I would like a bot to parse the images falling under this category and create and populate all needed subcategories for, e.g., Category:Fair use images created in 1952. BD2412 T 15:34, 7 August 2020 (UTC)
Website | Links |
---|---|
British Thoracic Society | 15 |
MeSH | 14 |
... | ... |
This might need two steps (per WAID, below) - generating a list of articles, and then generating a list of external links that have been used.
Make maintenance easier, by:
Discussions as to appropriate venue, and one not relevant to request (collapsed)
Medical external template usage (collapsed table)
Transclusion counts of templates under the en:Category:Medicine external link templates. Main space (article space) only. Templates transcluded through a Lua module were not (couldn't be) counted.
Thanks :) Close enough. I withdraw this request for the moment so that bot editors can focus on more worthy targets -- Tom (LT) ( talk) 23:52, 18 September 2020 (UTC)
Sorry for the messy entry, I'm not familiar with bot procedures but I wanted to flag this as it seems to have gone under the radar.
It would appear that a few months ago the content at Template:Germanic philology was merged into Template:Germanic languages, and the former was redirected to the latter. However, many pages included both templates, and so now they instead contain two copies of the same template (as the philology template simply reproduces the content of the languages template). For instance: Fingallian, Germanic philology. I am unsure how many pages this may affect.
To fix this, it seems it would be worthwhile to instruct a bot to:
Thanks. BlackholeWA ( talk) 02:44, 27 August 2020 (UTC)
Consider open source software developed for the Wikipedia/Wikimedia movement: usually its tasks are tracked in systems designed by and for software developers, such as GitHub Issues or Phabricator, but the major audience, i.e. the people who care most about its progress and whose feedback most needs to be solicited, is on Wikipedia.
I hereby propose the idea to create an OpenSourceSyncBot to sync a Wikipedia page, e.g. mw:ORES/Synced_tasks, with a search criterion in its relevant tracking system, e.g. the ORES component on Phabricator.
Phase 1: for any task in the tracking system, sync it onto the subpage on Wikipedia, giving people more transparency and visibility into the development progress. The format could be a wiki table with "task title, progress, reporter, assignee". The bot would also periodically sync to update this information.
Phase 2: for any newly added row on the Wikipedia page table, added by a Wikipedia user, the bot will create a new task in the external tracking system.
Proposer: xinbenlv Talk, Remember to "ping" me 23:06, 14 August 2020 (UTC)
For example, WP:Twinkle developers could better reach their users; see Wikipedia:Twinkle#Reporting bugs or requesting features.
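For Phase 1, the core of the sync is rendering tracker items as wiki table rows. A sketch under the assumption that each task arrives as a dict with these keys (the field names are illustrative, not Phabricator's actual API fields):

```python
def task_row(task):
    """Render one tracked task as a wikitable row; field names are assumed."""
    return "|-\n| {title} || {progress} || {reporter} || {assignee}".format(**task)

def sync_table(tasks):
    """Full wikitable body with the proposed columns."""
    header = '{| class="wikitable"\n! Task title !! Progress !! Reporter !! Assignee'
    return "\n".join([header] + [task_row(t) for t in tasks]) + "\n|}"
```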
(This is not a request for a bot. This is a request for a sanity check.)
WP:ELN is discussing the ==External links== section of Mary Tyler Moore. It contains (in part) this list:
* {{NYTtopic|people/m/mary_tyler_moore/}}
* {{IMDb name|1546}}
* {{tcmdb name|id=134771|name=Mary Tyler Moore}}
* {{iBDB name|023123}}
* {{findagrave|175697586}}
This is not an unusual set of links for BLP articles. Obviously, the exact list of links and the order they're presented in varies. Most of them use external link templates.
Imagine a future in which we developed a consensus that some/all of this "standard link dump" should be combined into a single template, perhaps similar to Template:Authority control. Am I correct that it would (if that magical future arrives) be a relatively simple matter for a bot to remove some of these (existing) items from this list and transform them into the new template, in at least most articles? If it's harder than it sounds, then I'd rather know that in advance. (Please ping me.) WhatamIdoing ( talk) 17:48, 19 July 2020 (UTC)
|imdb=1546 to have it kick out the IMDb link. I suppose that could be doable, but I don't think you'll ever get consensus to basically turn five templates into "five templates plus a wrapper template for them all". Primefac ( talk) 18:26, 19 July 2020 (UTC)
{{new thing |NYTtopic=people/m/mary_tyler_moore/ |IMDb name=1546 |tcmdb name=134771 |iBDB name=023123 |findagrave=175697586}}
and have the template display the same links more compactly. WhatamIdoing ( talk) 22:18, 19 July 2020 (UTC)
This isn't (straight away) another bot request, but rather (at the moment) only a request to see if anyone has the skills to create the code for it. I placed a request at Wikipedia:Bot requests/Archive 79#Civil parish bot and there was discussion at Wikipedia:Village pump (proposals)/Archive 160#Civil parish bot and User talk:DannyS712/Archive 12#Json format indicating that coding was needed. The basic format is at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Simple and I have attempted to do the coding at User:Crouch, Swale/Bot tasks/Civil parishes (current)/Coded. I don't have the skills to do the JSON bit, so I'm wondering if anyone does? If not, then this can be archived and I can get on with looking at creating them manually, thanks. Crouch, Swale ( talk) 20:58, 23 September 2020 (UTC)
The current way of dealing with double redirects is slow and inefficient. A far simpler way to deal with them would be to have a bot that detects when a new redirect is created, either from a merger or as a new page. If it finds a double redirect, it will fix it instantly. The current system is slow, and redirects can take several days to fix. If SineBot is able to sign posts in talk and user talk namespaces almost instantly, how hard can it be for a bot to fix double redirects faster? I-82-I | TALK 07:48, 29 August 2020 (UTC)
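Whatever the trigger (new-page feed, recent changes), the fix itself is just following the redirect chain to its end, with a guard against loops; a small sketch:

```python
def final_target(redirects, page, limit=10):
    """Follow {page: target} mappings until a non-redirect is reached.
    Returns None for redirect loops or implausibly long chains."""
    seen = set()
    while page in redirects:
        if page in seen or len(seen) >= limit:
            return None
        seen.add(page)
        page = redirects[page]
    return page
```

Every redirect in the chain would then be retargeted to the returned title.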
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I'm not sure if such a bot already exists, but shouldn't there be an automated script that tags categories under C1 if they remain empty for an allotted time (e.g. six hours)? ToThAc ( talk) 22:04, 7 October 2020 (UTC)
Usually editors moving pages don't move the editnotice attached to it. Either they forget, or aren't admin/TE so they don't do it (or both). {{ Editnotice/notice}} categorises such cases into Category:Editnotices whose targets are redirects (which I've been updating for a while) but it often takes months for the job queue to go over transclusions and add moved pages into this cat (see this on VPT), which makes it hard to even do this manually. I'm thinking a bot would (a) be able to do this sooner and get around that issue and (b) actually just do the move automatically, suppressing the redirect. One way would be to listen to Special:Log/move and check if a editnotice for page exists, this could be done continuously. Another is to regularly loop over all transclusions of {{ Editnotice}} (or [[ Special:PrefixIndex/Template:Editnotices/) daily and do the moves. There's <20k so this is feasible, I think, but this would leave a period of up to 24 hours (ideally, the editnotice shouldn't just be disappearing for a day, especially when they're ones required for DS etc). Thoughts on these options, or other alternative methods? ProcrastinatingReader ( talk) 16:41, 9 September 2020 (UTC)
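For the move-log option, the mapping from a moved page to its editnotice is deterministic, since per-page editnotices live at Template:Editnotices/Page/<full page name>; a sketch of the pairing step (the caller would still check which source editnotices actually exist before moving):

```python
EDITNOTICE_PREFIX = "Template:Editnotices/Page/"

def editnotice_moves(moved_pairs):
    """Given (old_title, new_title) pairs from Special:Log/move, return the
    corresponding editnotice moves to perform, as (old, new) pairs."""
    return [(EDITNOTICE_PREFIX + old, EDITNOTICE_PREFIX + new)
            for old, new in moved_pairs]
```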
This is a formal request to recruit @ Yobot: to tag talk pages under WikiProject Phoenicia. Please tag the pages under Category:Phoenicia; no auto-rating. Thanks ~ Elias Z. ( talkallam) 13:34, 1 September 2020 (UTC)
For many years, U.S. college articles were using manually updated tables like this to represent admissions statistics. Following a WikiProject Higher Education discussion, we've begun replacing them with {{ Infobox U.S. college admissions}}, which uses data available from the Common Data Set (and I think also IPEDS) for everything (except the optional historical test score parameters). Symbols for historical data are chosen automatically using the new {{ Fluctuation formatter}} I created.
Would anyone be interested in starting work on a bot that could gather the data and use it to update the templates automatically every year? Given the number of colleges in the U.S., doing so will save likely hundreds of hours of editor work per year. {{u| Sdkb}} talk 20:06, 29 August 2020 (UTC)
Participation at AfD often requires considerable research and debate to find consensus. It's therefore understandable that people get frustrated when, sometime after it's closed, the article is renominated without them knowing. Given how few participants many AfDs have, it sometimes happens that a well attended AfD is overturned by a much smaller group. But even when that doesn't happen, the second (or subsequent) nomination loses out on the efforts of those who researched before.
Anyone willing to make a bot that would look for "nomination)" in the title (or some other method of determining renominations) and, based on an opt-in list, notify past participants (if they want)? — Rhododendrites talk \\ 04:11, 30 August 2020 (UTC)
Just to clarify, in case it's unclear, when I say "opt-in" I intended that to mean opt-in for the service, and not on the level of the individual AfD. i.e. "I want this in general" rather than "if this specific article is renominated, I want to be notified". — Rhododendrites talk \\ 19:25, 31 August 2020 (UTC)
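Detecting renominations from the title convention mentioned above can be a single regex; a sketch assuming the usual "(Nth nomination)" suffix:

```python
import re

NOM_RE = re.compile(
    r"^Wikipedia:Articles for deletion/(.+) \((\d+)(?:st|nd|rd|th) nomination\)$")

def renomination_of(title):
    """Return (article, nomination number) for a renomination AfD, else None."""
    m = NOM_RE.match(title)
    return (m.group(1), int(m.group(2))) if m else None
```

The bot would then pull participants from the earlier AfD pages and intersect them with the opt-in list.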
Hello! I originally posted this on WP:AWBREQ, but a bot makes more sense. Currently, articles about curlers use various combinations of {{ Sports links}}, {{ WCT}}, {{ WCF}}, {{ CurlingZone}}, and other templates for external links, but they can all be simplified to just {{ Sports links}}, which would standardize our templates moving forward. Could a bot check all pages that use {{ WCF}}, {{ WCT}}, and {{ CurlingZone}}; remove those templates in the external links section (but not other article sections), along with {{ SR/Olympics profile}}, {{ IOC profile}}, {{ COC profile}}, {{ USOPC profile}}, {{ Olympedia}}, and {{ Olympic Channel}} (all of which are redundant with {{ Sports links}}); and then add {{ Sports links}} if it's not already there? Thanks! Allthegoldmedals ( talk) 11:59, 20 August 2020 (UTC)
More or less the same thing as Wikipedia:Bots/Requests for approval/DYKHousekeepingBot, which Shubinator says they don't have time to revive. The idea is to crawl Category:Pages using DYK talk with a missing entry, find the missing DYK blurbs, and add |entry= to these articles' {{ DYK talk}} templates on their talk pages.
For instance, 1st Polish Light Cavalry Regiment of the Imperial Guard has the DYK blurb (found in Wikipedia:Recent additions/2009/April)
In this case, Talk:1st Polish Light Cavalry Regiment of the Imperial Guard should be updated with {{DYK talk|...|entry=... that [[light cavalry|light-cavalrymen]] of the '''[[Polish 1st Light Cavalry Regiment of the Imperial Guard]]''' saved [[Napoleon I of France|Napoleon]]'s life at least three times?}}
Headbomb { t · c · p · b} 04:40, 9 October 2020 (UTC)
This might be something that'd more have to be done with AWB (in which case I'd appreciate advice on how), but to lay it out: I fairly often come across instances of e.g. Star Trek: The Next Generation that are not italicized. I can think of very few instances where this wikitext would show up, including the link, but we would not want to italicize. Would it be possible to get a bot to go around and identify instances of missing italicizations and fix them? (Italicization obviously isn't the most pressing issue facing the 'pedia, but since it is visible to readers, I don't think WP:COSMETICBOT applies.) {{u| Sdkb}} talk 21:10, 7 September 2020 (UTC)
"I can think of very few instances where this wikitext would show up": are there any? If so, it might not be a good task for a bot, as it wouldn't be able to differentiate here. ProcrastinatingReader ( talk) 16:47, 9 September 2020 (UTC)
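A conservative version of the fix only wraps a bare wikilink when it is not already flanked by quote marks, which sidesteps part of the ambiguity raised above; a sketch (a real run would still need to skip templates, citations, and file syntax):

```python
import re

def italicize_links(text, titles):
    """Wrap plain wikilinks to the given titles in ''…'' unless already italic."""
    for t in titles:
        # Lookaround: don't touch links already preceded/followed by a quote.
        pat = re.compile(r"(?<!')\[\[" + re.escape(t) + r"(\|[^\]]*)?\]\](?!')")
        text = pat.sub(lambda m: "''" + m.group(0) + "''", text)
    return text
```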
This would be a bot to remove 404 (the ones that appear red) links. — Preceding unsigned comment added by Moouser ( talk • contribs) 22:33, 16 November 2020 (UTC)
A recent change to the MediaWiki software has started assigning a tracking category, Category:Pages with non-numeric formatnum arguments, to pages that contain invalid input to formatnum, which is supposed to be given only numeric input. I have edited a few templates to get the article count down from about 150,000 to the current 31,000, but there are some instances of errors within articles that need to be corrected.
One of the errors is invalid input to currency templates, including {{ US$}}, {{ CAD}}, and other templates in Category:Currency templates. The invalid input often looks like {{US$|75 million}}, which should be written {{US$|75}}{{nbsp}}million.
Here's a sample fix.
This search shows some of the 500+ articles that have invalid text in {{ US$}}. The "insource" regex in the search shows the most common construction of the invalid text, and creating a regex to fix the affected articles should be easy. The tricky part is doing the same fix for about 50 templates and their redirects.
Is there anyone here who would be willing to work with me to fix these errors? I can create a list of probable articles and templates that are involved (although I don't know how to create a list of all of the possible redirects). I estimate that the affected article count is between 1,000 and 3,000. – Jonesey95 ( talk) 15:29, 29 September 2020 (UTC)
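A sketch of the per-article fix under discussion, using plain `re` on sample wikitext. The two template names and the magnitude words here are illustrative; a real bot would cover all ~50 currency templates and their redirects, and would fetch pages via AWB or pywikibot rather than operate on strings:

```python
import re

# Rewrite e.g. {{US$|75 million}} as {{US$|75}}{{nbsp}}million, leaving
# purely numeric arguments untouched. Template and magnitude lists are
# a small sample, not the full set mentioned in the request.
CURRENCY_RE = re.compile(
    r"\{\{\s*(US\$|CAD)\s*\|\s*([\d.,]+)\s+(million|billion|trillion)\s*\}\}"
)

def fix_currency(wikitext):
    return CURRENCY_RE.sub(r"{{\1|\2}}{{nbsp}}\3", wikitext)

print(fix_currency("It cost {{US$|75 million}} to build."))
```

The tricky part mentioned above (enumerating all templates and their redirects) would just extend the first alternation group.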
formatnum can produce "unreliable output". It looks like the MW developers have deprecated and started tracking this non-numeric input (see T237467 and T263592) as of sometime in the last week, so we either need to fix existing uses or write a new template. It would be great to have a new template that does what formatnum does; if you start developing such a template (it should have a better name than the poorly chosen "formatnum"), ping me and I'll help with QA. – Jonesey95 ( talk) 18:19, 29 September 2020 (UTC)
A lot of rcat templates specify the printworthiness of redirects through the |printworthy= parameter of {{Redirect template}}. All of these have, in their documentation, a notice asking editors to also add {{R printworthy}} or {{R unprintworthy}} (as appropriate) to redirects categorised by the template, if in the mainspace. However, very few editors actually take notice of this instruction, so how about a bot to do this instead?
The bot would be implemented (I hypothesise; I've never actually done this myself) by running through Category:Printworthy redirects and Category:Unprintworthy redirects, checking if each page includes {{ R printworthy}} or {{ R unprintworthy}}, and adding the relevant template if the answer is no (within an {{ Rcatshell}} if there is one).
Any thoughts? WT79 ( speak to me | editing patterns | what I been doing) 17:13, 9 September 2020 (UTC)
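The hypothesised loop above can be sketched as string checks on a redirect's wikitext. This is the decision logic only: fetching members of Category:Printworthy redirects / Category:Unprintworthy redirects would be done with pywikibot in practice, and placing the tag inside an existing {{Rcatshell}} is left out here:

```python
import re

def needs_rcat(wikitext, printworthy):
    """True if the redirect lacks the relevant printworthiness rcat."""
    wanted = "R printworthy" if printworthy else "R unprintworthy"
    return not re.search(r"\{\{\s*" + re.escape(wanted) + r"\s*[|}]", wikitext)

def add_rcat(wikitext, printworthy):
    wanted = "R printworthy" if printworthy else "R unprintworthy"
    if not needs_rcat(wikitext, printworthy):
        return wikitext  # already tagged; skip, as the request says
    return wikitext + "\n{{" + wanted + "}}"
```

A production bot would also have to recognise the templates' redirects, not just their canonical names.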
|printworthy=
parameter adds the pages to
Category:Printworthy redirects, if they are in the main namepace. However, {{
R printworthy}} / {{
R unprintworthy}} are the standard
Rcat templates to use to mark redirects as
printworthy / un
printworthy; these are used separately to other templates on the redirect. They may be used other rcats which specify printworthyness, so isn't just part of {{
Redirect template}}, which is only supposed to be used as a meta-template. If their functionality was merged into {{
Redirect template}}, and {{
R printworthy}} and {{
R unprintworthy}} replaced with '{{Redirect template|printworthy=<!--yes or no as appropriate-->}}
', a reverse problem would be caused as {{
R printworthy}}/{{
R unprintworthy}} would need to be removed from pages where printworthiness is already specified, to avoid duplication.
WT79 (
speak to me |
editing patterns |
what I been doing) 14:23, 14 September 2020 (UTC) (edited 16:16, 12 October 2020 (UTC))Scrolling through Wikipedia:Typo Team/moss/E, I noticed that a majority of typos marked are incorrect spacing after periods.As an example, I would like to name the typo I just made between "periods" and "as". Now, to qualify for correction, the words would have to:
Interested to hear what you think. Opalzukor ( talk) 16:11, 16 September 2020 (UTC)
I very often come across talk pages that are archiving either way too aggressively or (less frequently) not at all aggressively enough. Since the frequency of new talk page threads is something quantifiable, I'd think it'd be possible to use an algorithm to determine when this is happening and automatically adjust the archiving period. I envision that this would be only for mainspace talk pages, since non-mainspace pages have differing desires for how long old threads ought to stick around. Integrating with the current manual adjustment system would be tricky, but this could eventually save a bunch of editor effort and make talk pages function better. {{u| Sdkb}} talk 04:29, 13 October 2020 (UTC)
|age=auto at User:ClueBot III/ArchiveThis and make it suggested/default, which handles the issue for new pages going forward. Then, once that's been established for a while, we could start considering mass switches for existing pages, but even then I'd assume we'd want to allow opting out. {{u| Sdkb}} talk 23:44, 13 October 2020 (UTC)
Related to this: a bot to keep automatic archival information templates such as {{archives}} or {{auto archiving notice}} in sync with actual bot parameters. That is, if we change |algo=old(60d) to |algo=old(90d) (this example uses User:Lowercase sigmabot III syntax), a bot could come in and change |age=60 to |age=90 of such a template, if present. CapnZapp ( talk) 17:05, 21 October 2020 (UTC)
|algo= value. Primefac ( talk) 19:34, 8 November 2020 (UTC)

On certain pages, it would be useful to have a bot automatically do null edits after a certain period. I'm thinking of placing something like {{ Bot purge}}
{{Bot purge}} <!-- Purges every day (00:00:01 UTC)-->
{{Bot purge|12 hours}} <!-- Purges every 12 hours (00:00:01 UTC; 12:00:01 UTC)-->
{{Bot purge|1 hour|mode=null}} <!-- Null edits every 1 hour (00:00:01 UTC; 01:00:01 UTC; 02:00:01 UTC...)-->
{{Bot purge|15 minutes}} <!-- Purges every 15 minutes (00:00:01 UTC; 00:15:01 UTC; 00:30:01 UTC...)-->
{{Bot purge|UTC=20:00:00}} <!-- Purges at 20:00:00 UTC every day-->
on a page, and then the bot taking its instructions from there. Headbomb { t · c · p · b} 18:54, 22 October 2020 (UTC)
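As a sketch of how a bot might read those instructions, here is a minimal parser for the interval and |mode= parameters shown above (the |UTC= form is omitted; the `parse_bot_purge` name and the default of one day are assumptions, not an existing implementation):

```python
import re

UNITS = {"minute": 60, "minutes": 60, "hour": 3600, "hours": 3600,
         "day": 86400, "days": 86400}

def parse_bot_purge(template_call):
    """Parse a {{Bot purge|...}} call into (interval_seconds, mode).
    Defaults to a daily purge, matching the bare {{Bot purge}} example."""
    inner = template_call.strip("{}").split("|")
    interval, mode = 86400, "purge"
    for part in inner[1:]:
        part = part.strip()
        if part.startswith("mode="):
            mode = part[5:]
        else:
            m = re.match(r"(\d+)\s+(\w+)", part)
            if m:
                interval = int(m.group(1)) * UNITS[m.group(2)]
    return interval, mode

print(parse_bot_purge("{{Bot purge|12 hours}}"))
```

The bot would then issue purge or null-edit requests through the MediaWiki API on that schedule.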
|mode=purge vs |mode=null for the cases where it matters. I know that for the usages I have in mind, purges are insufficient. Ultimately it doesn't really matter, as long as we have a scalable user-friendly way to get bots to purge/null edit certain pages. Headbomb { t · c · p · b} 20:28, 22 October 2020 (UTC)
{{Bot purge}} is actually used. So if you have it on e.g. User:AAlertBot/Status2, then only that page would get bot-purged, and not the pages that transclude User:AAlertBot/Status2. But I'm spitballing ideas here; it could be handy to have transclusions get purged too. Perhaps |scope=transclusions / |scope=this page? Limiting to metaspace (i.e. not articles, not mainspace talk) would also likely be a good initial limitation. Headbomb { t · c · p · b} 22:54, 22 October 2020 (UTC)
"only that page would get bot-purged, and not the pages that transclude...": That is not how the job queue works, AFAIK. Pages that are null-edited get put in the job queue to have their transclusions null-edited as well (eventually). I think a purge runs only on the purged page, though, with no downstream effects. – Jonesey95 ( talk) 23:00, 22 October 2020 (UTC)
{{subst:void}} I believe, and that certainly works. Headbomb { t · c · p · b} 21:11, 23 October 2020 (UTC)
forcelinkupdate requests through the API. -- Redrose64 🌹 ( talk) 10:15, 24 October 2020 (UTC)
For the record, I speedily approved this, and leave implementation details to @ ProcrastinatingReader: and the community in general. If anyone has a problem with that, I can rescind approval. Headbomb { t · c · p · b} 18:32, 29 October 2020 (UTC)
Hello. I'm not too familiar with Wikipedia bots, but I'm wondering if one exists that eliminates double spaces in pages ("[][]" instead of "[]"). I do a lot of control-F work to eliminate these spaces, but I think this is the kind of task that would be best completed by a bot. Thank you, KidAd talk 23:31, 2 November 2020 (UTC)
_+ replace with _ (where _ is a space), but that does require human review a lot of the time. Headbomb { t · c · p · b} 00:50, 3 November 2020 (UTC)
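As a sketch of that find-and-replace rule, with the precaution of leaving preformatted text alone (spaces inside `<pre>` blocks are significant, so a bot must not touch them; the function name is illustrative):

```python
import re

def collapse_spaces(wikitext):
    """Collapse runs of two or more spaces, skipping <pre>...</pre> blocks."""
    parts = re.split(r"(<pre>.*?</pre>)", wikitext, flags=re.S)
    return "".join(
        p if p.startswith("<pre>") else re.sub(r" {2,}", " ", p)
        for p in parts
    )

print(collapse_spaces("a  b <pre>c  d</pre> e  f"))
```

A real pass would also need to skip source blocks, tables that rely on spacing, and anything styled `white-space:pre`, which is part of why human review keeps being needed.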
<pre>...</pre> element, where an element is styled with the declaration white-space:pre;. So opening up an edit to reduce those spaces in the wikisource is simply a waste of time. -- Redrose64 🌹 ( talk) 21:04, 4 November 2020 (UTC)

A football position article, Guard (American and Canadian football), was moved on 27 August 2019 to Guard (gridiron football) per [105] with an edit summary of "moved page Guard (American and Canadian football) to Guard (gridiron football): to match Tackle (gridiron football position) and Center (gridiron football)".
AWB currently matches 1575 links to Guard (American and Canadian football).
Can we search and replace to bypass redirects in two capitalization formats, like:
1. [[Guard (American and Canadian football)|Guard]] --> [[Guard (gridiron football)|Guard]]
2. [[Guard (American and Canadian football)|guard]] --> [[Guard (gridiron football)|guard]]
Any "missed" links/redirects should be few and I can manually (or AWB) correct them.
UW Dawgs ( talk) 20:58, 6 December 2020 (UTC)
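Since piped links keep their display text, both capitalisation cases above collapse into one pattern; a sketch (plain string work; AWB or pywikibot would supply the page fetching, and bare unpiped links are piped so the rendered text is unchanged):

```python
import re

OLD = "Guard (American and Canadian football)"
NEW = "Guard (gridiron football)"

def retarget(wikitext):
    # Piped links: swap the target, keep the displayed text,
    # covering both |Guard and |guard from the request.
    wikitext = re.sub(r"\[\[" + re.escape(OLD) + r"\|",
                      "[[" + NEW + "|", wikitext)
    # Bare links: pipe them so readers see the same text as before.
    wikitext = wikitext.replace("[[" + OLD + "]]",
                                "[[" + NEW + "|" + OLD + "]]")
    return wikitext
```

Template parameters and other non-link mentions would be the "missed" cases left for manual/AWB follow-up.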
According to quarry:query/49607, there are currently 2,585 articles tagged as Redirect-Class by at least one WikiProject that are not actually redirects, and 2,179 if you further exclude disambiguation pages. These incorrectly tagged articles are likely to receive less attention from the WikiProject as a result, and I can't imagine any good reason why a project would want to leave non-redirects tagged as Redirect-Class. Thus, two questions:
Vahurzpu ( talk) 17:57, 9 November 2020 (UTC)
{{WikiProject Military history}}, there should never be any need to explicitly set |class=redirect (or equivalent), because all WikiProjects that provide Redirect-Class (other than Military history) also have code in their banners that will autodetect that a talk page is that of a redirect. So either altering it to the valueless form |class=, or removing the parameter entirely, will both work. In my opinion, the first method is best for pages in the main Talk: space, since an explicit value (stub, start etc.) will need to be added later on; but the second method is more suited to all other talk spaces, because the namespace is autodetected, so the page will be automatically placed in Template-Class or similar, as applicable. Note that {{WikiProject Military history}} recognises only |class=rdr, |class=red, and |class=redirect (all case-insensitive); it does not recognise the |class=redir that the others all allow. So talk pages having {{WikiProject Military history}} and one of those three values for |class= will need to be individually checked to see if the talk page is that of a redirect; if it is, the value in |class= will need to be left alone. -- Redrose64 🌹 ( talk) 11:08, 10 November 2020 (UTC)
{{ resolved}} This is a request for a mass undo of about 200 messages delivered at 18:43 and 18:44, 8 December 2020 (UTC time) by the MediaWiki message delivery service. See its recent contributions and Special:Log/massmessage. I queued a message for delivery about 15 hours before that time stamp, and all messages were delivered, and then through some apparent hiccup, a subset of editors received the message again, 15 hours later.
I don't know of an easy way to undo those 200 edits (194 to be precise, I believe). Is there someone with some sort of script/bot/privilege who is able to quickly and easily undo them? Thanks in advance. – Jonesey95 ( talk) 19:29, 8 December 2020 (UTC)
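Since the duplicate deliveries all landed in a known two-minute window, selecting the edits to undo is just a timestamp filter over the delivery account's contributions (fetched via the API in a real script; this sketches only the selection logic, with the `is_duplicate` name assumed):

```python
from datetime import datetime, timezone

# The duplicate deliveries landed 18:43-18:44 UTC, 8 December 2020.
START = datetime(2020, 12, 8, 18, 43, tzinfo=timezone.utc)
END = datetime(2020, 12, 8, 18, 45, tzinfo=timezone.utc)

def is_duplicate(edit):
    """True for a contribution inside the duplicate-delivery window."""
    return START <= edit["timestamp"] < END
```

Each selected revision would then be reverted with an API `undo` (or rollback, for someone with the right) in a loop.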
Related discussions:
A change back in June to the introduction shown to new users has resulted in new pages being created as subpages of Draft:Sample page when users complete the introduction without logging in (see Special:PrefixIndex/Draft:Sample_page). These are essentially individualized sandboxes, and should be routinely deleted - they're test pages by definition so WP:G2 applies, and they often contain material that qualifies for deletion under other speedy criteria. Can someone code an adminbot that will look for these subpages and delete them, maybe if they have not been edited in a few days? Ivanvector ( Talk/ Edits) 14:45, 30 September 2020 (UTC)
A bot would be useful on Wikipedia:Translators available for the sorting the lists of users, in each section, by the date of last edit (descending). I'd suggest running the task monthly. – SD0001 ( talk) 12:19, 28 September 2020 (UTC)
Is there a bot that adds the {{DEFAULTSORT}} magic word to articles that need it but don't have it? I have a list of over 1k television "List of episodes" articles that don't have DEFAULTSORT. Cheers. -- /Alex/21 09:52, 2 October 2020 (UTC)
[[Category:Star Trek: Enterprise episodes| ]]
[[Category:Star Trek episode lists|Enterprise episodes]]
[[Category:Lists of American science fiction television series episodes|Star Trek: Enterprise]]
There are three main cases to consider:
Note that the lists and categories in question might not use identical phrasing for some strange "local consensus" reason, so determining which case applies probably wouldn't be a good bot task. ― cobaltcigs 08:28, 17 October 2020 (UTC)
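Even if choosing the sort key needs a human, the mechanical part (detecting a missing DEFAULTSORT and inserting it in the conventional spot, just before the first category) is straightforward; a sketch, with the function names assumed:

```python
import re

def add_defaultsort(wikitext, sortkey):
    """Insert {{DEFAULTSORT:sortkey}} before the first category, if absent.
    The sortkey itself must be chosen per the cases discussed above."""
    if re.search(r"\{\{\s*DEFAULTSORT\s*[:|]", wikitext):
        return wikitext  # already present; leave alone
    m = re.search(r"\[\[Category:", wikitext)
    tag = "{{DEFAULTSORT:" + sortkey + "}}\n"
    if m:
        return wikitext[:m.start()] + tag + wikitext[m.start():]
    return wikitext + "\n" + tag
```

A semi-automated AWB run could use this with a human supplying or confirming each key.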
I was going to copy all films from the American television films category into the American films category (using Cat-a-lot), because the template on the American films category specifically tells editors to do this. However, an administrator objected because he did not want his watchlist to be full of hundreds of minor edits. He then requested that I get a bot to transfer articles from non-diffusing subcategories into the appropriate parent categories. I have taken into account the fact that such a bot may transfer categories that were inappropriately placed, though. Is this still an acceptable proposal? Scorpions13256 ( talk) 20:31, 14 September 2020 (UTC)
@ Scorpions13256: Just to clarify, is this a request for all items in the sub-cats of American films, or just American television films? Mdann52 ( talk) 21:57, 19 October 2020 (UTC)
Hi all, there are about 650 articles which were previously peer reviewed. However, because of article moves, the links to the reviews are now broken: Category:Pages using Template:Old peer review with broken archive link. See for example Talk:Battle of the Catalaunian Plains. I'm seeking bot help repairing the 650 links. Essentially, the bot will need to go through each article in that category, determine what the name of the article was when the peer review was closed, and then update the template {{Old peer review}} on the current article talk page by adding |reviewedname=the old name. Extra useful if the date can be found and inserted too (|date=date the review was closed). -- Tom (LT) ( talk) 00:25, 19 September 2020 (UTC)
Hi. I would like to own a bot to give me assistance with reverting vandalism and warning users who have vandalised Wikipedia pages. I would like the bot to be called OverriddenBot, since my username is Overridden and that’s what I chose for the bot. Thank you. Overridden ( talk) 08:26, 19 December 2020 (UTC)
We have numerous articles with titles like Thomas Williams (Alabama), using the state alone as a disambiguator. These are, as it turns out, disfavored because the person who is the subject of the article is not an example of an Alabama. However, it's a pain in the ass to dig them out and fix them manually. What I would ideally like is a bot to find all biographical articles with titles that are Person's name (State) and replace them with Person's name (State profession) (in the above case, it would be Thomas Williams (Alabama politician)), and then update all incoming links to that as well. I recognize that this can be tricky, because many people have multiple professions and it may require a human eye to choose the best disambiguator, but I think there are some broad categories that can be done automatically. For example (again, as with Thomas Williams (Alabama)), anyone who has served in the United States Congress can almost certainly be disambiguated with "politician" for their profession. Since many of the issues will be with members of Congress auto-generated at these titles in the first place, that should handle a good number of them. BD2412 T 03:03, 13 October 2020 (UTC)
_) instead of spaces. – Majavah talk · edits 17:31, 13 October 2020 (UTC)
Interstate 635 (Texas), Colorado River (Texas), Toyota Stadium (Texas), etc. are obviously not "examples of a Texas" either. Why are "state-only disambiguators" only "disfavored" for politicians' names? ― cobaltcigs 08:01, 17 October 2020 (UTC)
[[... (OCCUPATION born YEAR)]] (plus or minus a comma, and unless their birth years are unknown), because non-politicians tend not to be strongly associated with a particular state. [[... (Bavaria politician)]] and [[... (Baden-Württemberg politician)]] are used anywhere, because we assume readers know U.S. states but not German states. Falling back on the birth year is probably most common for them as well.

Hi, I'm looking to get a bot to update the Template:IMDb episodes links in TV season articles, by adding |season=x to the link template, with x being the season number, so the link will directly point to that respective season on IMDb; when |season=x is not specified in the template, it just links to the most current season. I'm guessing the bot can just grab the season number from the article title? This would only need to be done for season articles; example, Fargo (season 4), while the IMDb links for Fargo (TV series) and List of Fargo episodes can remain unchanged. Thanks. Drovethrughosts ( talk) 16:26, 27 October 2020 (UTC)
An example: {{Imdb episodes|2802850|Fargo}} would be changed to {{Imdb episodes|2802850|Fargo|season=4}}
{{IMDb title}} like in Twin Peaks (season 3) to use {{IMDb episodes}}.

A common mistake is to type "Wikiproject" instead of "WikiProject" to get to pages like Template:WikiProject Physics or Wikipedia:WikiProject Physics. So a bot that would automatically create those would be really useful.
This should only be the base pages, not the subpages like Wikipedia:WikiProject Physics/Quality Control. Headbomb { t · c · p · b} 16:51, 31 July 2020 (UTC)
Category:Redirects tagged as disambiguation pages contains lots of talk pages of redirects which are incorrectly tagged with {{ WikiProject Disambiguation}} (or one of its 45 redirects). Please could a bot:
remove |class=disambig from any other project banner on the page. Thank you — Martin ( MSGJ · talk) 17:55, 29 October 2020 (UTC)
"we should keep banners on talk pages where the corresponding mainspace page is an actual dab" – this is precisely where the banner is not needed. If the talk page has other content, the banner doesn't hurt. But if the banner is all that is there, then, as the template's documentation makes clear, the page shouldn't have been created in the first place. – Uanfala (talk) 20:22, 29 October 2020 (UTC)
Apparently this is a lot more complicated/controversial than I envisioned it would be, so I withdraw the request for now. If there is a reliable way of determining whether the target of these redirects is a dab page then I may return. — Martin ( MSGJ · talk) 08:07, 30 October 2020 (UTC)
This task should be relatively simple. Find all cases of redirects like
and mark them with {{ R to diacritics}}
Then find all cases of redirects like
and mark them with {{ R from diacritics}}
Obviously pages that are already tagged should be skipped. Headbomb { t · c · p · b} 03:45, 17 September 2020 (UTC)
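The classification rule implied by the two cases above reduces to comparing titles with their diacritics stripped; a sketch of that check (titles would come from the redirect table via pywikibot or a database query; the function names are illustrative):

```python
import unicodedata

def strip_diacritics(s):
    """Remove combining marks: 'Gödel' -> 'Godel'."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

def rcat_for(redirect_title, target_title):
    """Which rcat applies, if any, under the rule described above."""
    if strip_diacritics(redirect_title) != strip_diacritics(target_title):
        return None  # not a diacritics pair at all
    if redirect_title == strip_diacritics(redirect_title):
        return "R to diacritics"   # plain redirect -> accented target
    return "R from diacritics"     # accented redirect -> plain target
```

Already-tagged redirects would be filtered out first, per the request.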
A common PMR request is to delete redirects following draftification. I believe this is covered under WP:R2. See Quarry - we only have 6 of these pages currently, so they do usually get suppressed or tagged. I imagine it's been discussed before, but I couldn't find it in BOTREQ archives: why, instead of a lot of manual deletions and PMR reqs, does a simple bot (well, adminbot) not just delete these auto after a bit of time elapses? With a basic check to ensure the redirect has no real history. ProcrastinatingReader ( talk) 09:01, 2 November 2020 (UTC)
"after a bit of time elapses" part. When vandals draftify articles, those drafts are usually reverted in minutes/hours by patrollers. So if there's a 12-hour delay I don't think it could be abused? ProcrastinatingReader ( talk) 11:06, 2 November 2020 (UTC)
"ensure the redirect has no history" (I would have the threshold very low, at maybe 2 revisions or less). I don't see how you could then use the bot to vandalise; if you move an article to draftspace (and the redirect gets deleted), the article can just get moved back; if you just replace an article with a redirect to draftspace, it does not get deleted because it has some history. Seems like a lose-lose (for the vandal) situation. WT79 ( speak to me | editing patterns | what I been doing) 16:15, 2 November 2020 (UTC)
"I don't see any vandalism problem." Example: user moves a low-watched but perfectly satisfactory page to the draft space, which is then R2'd by the bot. The page was written five years ago and all of the original editors are inactive. Thus, no one notices, and it's deleted after six months per WP:G13. I can think of a half-dozen others, but per WP:BEANS I'll just stick with the most obvious. Primefac ( talk) 17:18, 2 November 2020 (UTC)
Some years ago, I edited a number of articles about the Civil rights movement. I cited Civil Rights Movement Veterans ( https://www.crmvet.org) as an information source. Other Wiki editors also cited that website as a source in their articles and edits. Last year it changed its name to "Civil Rights Movement Archive," but all their URLs remained the same.
By hand, I edited the Civil rights movement article to change the name of the source from "Veterans" to "Archive." My computer skills are primitive and it was far too time consuming. I did a Wikipedia search for "Civil Rights Movement Veterans" (note quotes) which returned 108 wiki articles.
Could someone run a bot to automatically change "Civil Rights Movement Veterans" references to "Civil Rights Movement Archive" (leaving all URLs unchanged)?
Thanks
Brucehartford ( talk) 18:31, 15 November 2020 (UTC)
No doubt you're correct. Unfortunately, my computer skills are not up to AWB. I looked at it, and it was beyond me. Thanks for checking into it though. Brucehartford ( talk) 00:15, 18 November 2020 (UTC)
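For reference, the per-article edit here is a single literal replacement, which is why AWB (or any trivial script) handles it; a sketch, with the function name assumed:

```python
def rename_source(wikitext):
    # Name change only; the URLs stay the same, per the request above.
    return wikitext.replace("Civil Rights Movement Veterans",
                            "Civil Rights Movement Archive")
```

The 108 articles from the search would simply be run through this one at a time.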
This is a fairly self-explanatory task; I checked with AnomieBOT, which currently does most template substing, a few days ago, but it was deemed too difficult to determine which transclusions were in section headers. See the template's documentation for details on why substitution is necessary. WT79 ( speak to me | editing patterns | what I been doing) 16:52, 6 November 2020 (UTC)
/* {{anchor|Foo}}Bar */ [rest of summary]
, even though Foo is normally invisible to readers, and the entire string, with curly brackets, does not work as a section link. This is not generated by the raw html, which is ignored by the summary generator. We need to clean up these existing uses.
WT79 (
speak to me |
editing patterns |
what I been doing) 21:42, 6 November 2020 (UTC)
There are about a thousand articles that contain the parameter
Ship sail plan = Full rigged ship
but should be
Ship sail plan = Full-rigged ship
according to well-known dictionaries and common understanding of compound modifiers. About a hundred or so are unlinked, and it wouldn't hurt to link them while we're at it. There may or may not be spaces on either side of the equals sign. Chris the speller yack 17:37, 13 November 2020 (UTC)
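A sketch of the substitution, tolerating the variable spacing around the equals sign and adding the link at the same time (already-linked values are left untouched because the pattern requires the value to start with "Full"):

```python
import re

# Hyphenate and link the sail-plan value; "[- ]" also catches instances
# that are already hyphenated but still unlinked.
SAIL_RE = re.compile(r"(\|\s*Ship sail plan\s*=\s*)Full[- ]rigged ship")

def fix_sail_plan(wikitext):
    return SAIL_RE.sub(r"\1[[Full-rigged ship]]", wikitext)
```

An AWB find-and-replace with the same regex would do equally well for a one-off run.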
When someone changes a section name, there's no indication that someone else somewhere on Wikipedia might have created a link to that section that will be broken by the name change. I occasionally come across instances of such broken anchor links. Is there any bot patrolling for this and changing links (or, if that would be disruptive in some cases, adding an {{ anchor}} to the destination page)? If not, I'd think we'd want to set that up. {{u| Sdkb}} talk 05:53, 20 September 2020 (UTC)
I recently remarked on the discord that people fairly frequently misspell my username, sometimes resulting in missed pings, and several others chimed in that they have the same or a similar issue. It would be nice to have a bot that could work off a whitelist of common misspellings of usernames, and fix them/ping the editor. We'd probably want a little oversight of the list to prevent abuse, but otherwise it'd hopefully be pretty straightforward. We might have it append some smalltext, similar to {{ Unsigned}}. {{u| Sdkb}} talk 21:55, 9 November 2020 (UTC)
Hi! An editor probably tried to mention you (link to diff) on page (link), but misspelled your account name. (Sent in error? Report here.) Ovinus ( talk) 07:17, 10 November 2020 (UTC)
"Correcting misspelled username of C0mpl1c8tD NamE on behalf of BadSpellr (report error)" would be sufficient. That would get around any trickiness with pings of multiple users, etc., since it wouldn't append any smalltext.
Following discussion in the first half of 2020, Template:Infobox dog breed underwent a minor redesign to reduce the focus on kennel clubs from English speaking countries [108]. As a result a number of parameters were deprecated but remain in many of the 627 transclusions. It is requested that a bot be tasked to remove the following deprecated parameters from these pages:
| patronage = | fcigroup = | fcisection = | fcinum = | akcgroup = | akcstd = | akcstd1 = | akcstd2 = | akcfss = | akcmisc = | ankcgroup = | ankcstd = | ankcstd1 = | ankcstd2 = | ckcgroup = | ckcstd = | ckcstd1 = | ckcstd2 = | ckcmisc = | kcukgroup = | kcukstd = | kcukstd1 = | kcukstd2 = | nzkcgroup = | nzkcstd = | nzkcstd1 = | nzkcstd2 = | ukcgroup = | ukcstd = | ukcstd1 = | ukcstd2 = | otherstd =
This is not a war stopper, but it may cause some confusion for unknowing editors in the future. Kind regards, Cavalryman ( talk) 22:46, 7 January 2021 (UTC).
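A sketch of the removal, assuming the common one-parameter-per-line infobox layout (only a sample of the deprecated names is listed here; values spanning multiple lines would need a real parser such as mwparserfromhell instead of a regex):

```python
import re

# Sample of the deprecated parameters listed above; the real run
# would include all of them.
DEPRECATED = ["patronage", "fcigroup", "fcisection", "fcinum",
              "akcgroup", "akcstd", "kcukgroup", "otherstd"]

def strip_params(wikitext):
    """Delete whole '| name = value' lines for deprecated parameters."""
    pat = r"^\s*\|\s*(?:%s)\s*=.*\n?" % "|".join(DEPRECATED)
    return re.sub(pat, "", wikitext, flags=re.M)
```

Running this over the 627 transclusions would be a standard one-off AWB/bot task.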
Hi all, I was wondering if anyone was interested in developing a script for talk pages to automatically role templates like DYK, GA and PR into an {{ ArticleHistory}} format? I occasionally Wikignome, and it occurs to me such a script would likely be very useful for myself and many other editors, by automating a fairly time consuming manual process. The benefits will be more readable and organised talk pages, as well as a more comprehensive history for some articles. What a noble goal! -- Tom (LT) ( talk) 06:44, 27 July 2020 (UTC)
I posted at village pump earlier and didn't get any responses, so I assume such a script doesn't exist, therefore I thought I'd ask here :). -- Tom (LT) ( talk) 06:44, 27 July 2020 (UTC)
@ Hawkeye7 could FACBot or MilHist bot be customised to run on new good articles or peer reviews every so often? -- Tom (LT) ( talk) 04:42, 19 September 2020 (UTC)
Sorry, I was just made aware of this discussion: please see Taming talk clutter from 2008 and read this discussion at Village Pump Technical.
Gimmetrow and Dr Pda designed the article milestones. Gimmetrow's old bot (Gimmebot) rolled EVERY content review template into the article milestones, so it can be done-- that is GA, PR, FA, everything. But more, he ordered the events logically and sensibly, and I have been going through and trying to fix at least the October FAs, since a) all templates are no longer rolled in by bot, b) some GA passes use faulty templates, c) many DYK noms do not identify the nom page, d) some processes are not providing oldids, and e) OTD is off doing their own thing, dumping clutter on to talk pages outside of the article milestones.
No, oldid is not REQUIRED for proper display, but neither is it hard to find. Dr pda also used to have a script that returned an oldid based on any timestamp. ALL OF THIS was accomplished more than a decade ago, so I'm sure it can be now. And the point of the milestones is to always be able to click back on any date and see what the article looked like at the time of that event.
GimmeBot processed every GA and every FA and every PR. If any one is going to take this on, please try to return sensible ordering of the milestones as they used to be and as I have been correcting them, eg, here. Separate each event, in order, and put the rest of the important stuff at the bottom. And get OTD and ITN on board, and figure out why DYK isn't providing nom pages. Happy to help if someone is going to take this on; as of now, I am repairing all FACs and FARs manually. See my contribs. SandyGeorgia ( Talk) 21:19, 1 November 2020 (UTC)
Enterprisey starting over with a brain dump of everything I know that is going wrong with Template:Article history, and things that might be done to fix the issues. Historically, when GimmeBot was doing everything, there was very little manual intervention. Since the demise of GimmeBot, we have different processes going different ways, nothing standardized, and some editors intervening manually and causing errors. This will be partially an exercise in getting everyone back on the same page.
In no particular order of priority:
I will add to this list as I recall other things ... SandyGeorgia ( Talk) 13:38, 2 November 2020 (UTC)
From user talk:SD0001:
... would there be a way to do a bot report of pending articles that have titles identical to titles on other Wikipedias, with links to those foreign-language Wikipedia pages? If an article exists on another Wikipedia, it's a good indication that the draft should be approved. Thanks, Calliopejen1 (talk) 18:19 pm, 11 November 2020 (UTC)
I don't think using wikidata is an option since AFC drafts are very unlikely to have been linked to wikidata. Is there another way this could be done? – SD0001 ( talk) 06:33, 13 November 2020 (UTC)
Using a SQL query this would be a cross-database join. My bot "shadows" does something similar: it looks for File: pages that have the same name on Commons and Enwiki. It's pretty fast, even though Commons has 60 million File: pages, which exceeds the total of all mainspace pages in all wikis by a fair amount. The problem is Wikitech is redesigning the SQL servers and cross-database joins will soon no longer be available. A Phab ticket is open to try and find a solution. If you would like to follow developments see Phab T267992. -- Green C 19:03, 17 November 2020 (UTC)
I come across way too many article talks, like Talk:Jennifer Lawrence, where the {{ Archives}} causes that ugly overlap. It happens whenever the template isn't at the bottom of the list of talk banners (view source to see what I mean). To fix, we'd need a continuous bot to make sure this template keeps getting moved to the bottom of talk page banners. I don't think a CSS fix is really possible for this, and a JS fix would not be preferable to just having a bot maintain talk pages. I've made a discussion on the talk last week, see Redrose's response there for useful info as well (perhaps a broader bot for that purpose should be considered). It reminds me that another issue we see is DS templates constantly in the wrong order, it's advised by the template itself, and WP:TALKORDER, to have them below the talk header. Yet they seem to be scattered randomly. We commonly have random whitespace in talk page banners, too, thus random newlines. Really a bot to clean all this up would be a good idea, and enforce order (except when opted-out, I suppose). ProcrastinatingReader ( talk) 22:04, 2 September 2020 (UTC)
Have a bot remove a user from the category Category:Wikipedia usernames with possible policy issues when they have been inactive for over one year or have been blocked indefinitely. Heart (talk) 03:15, 9 October 2020 (UTC)
Hey I need a simple bot that could be able to add words to the links I send it. Maybe have the option where to add the text, but also have an option to remove all the text that you put in the bot once it comes across one of the words on the links. Might've not expressed myself the best but I hope you guys got my message. — Preceding unsigned comment added by JokerLow ( talk • contribs) 23:51, 5 January 2021 (UTC)
Hello, I'm here to request a bot to make an article alert page for the WP:WILDFIRE wikiproject, like WP:CALI and WP:USA have. --🔥 Lightning Complex Fire🔥 17:51, 8 January 2021 (UTC)
Per MOS:REFPUNCT, citations are supposed to go after punctuation like periods and commas, not before it. This is already included in GENFIXes, but I think it's noticeable enough to readers that it'd be good to have a bot working on it; it's not really WP:COSMETICBOT to my reading. Yobot has an approved task for doing this, but given how many pages I've come across with this issue, I'm guessing it's no longer working. {{u| Sdkb}} talk 20:29, 10 January 2021 (UTC)
I resumed the bot task. If there is any problem, please report it immediately. -- Magioladitis ( talk) 09:44, 14 January 2021 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Sansoni (publisher) is an old and important Italian publisher, whose page was recently created.
There are hundreds of pages with Cite book templates for works published by Sansoni.
It would be useful to link them to the publisher page.
So my proposal is that a bot should look for instances of {{ Cite book }} where there is one of these parameters:
|publisher=G. C. Sansoni
|publisher=G.C. Sansoni
|publisher=Sansoni
And replace it respectively with:
|publisher=[[Sansoni (publisher)|G.C. Sansoni]]
|publisher=[[Sansoni (publisher)|G.C. Sansoni]]
|publisher=[[Sansoni (publisher)|Sansoni]]
The replacement should only be made on the first instance in each page, of course, to avoid excessive wikilinks.
Thank you in advance!
-- Lou Crazy ( talk) 02:21, 14 January 2021 (UTC)
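A sketch of how the first-instance rule could be implemented (patterns and function name are mine, built from the request, not part of it): re.sub with count=1 stops after one replacement, and trying the longer "G. C." variants before the bare "Sansoni" pattern avoids half-matches.

```python
import re

# Longer variants come first so "G. C. Sansoni" is not half-matched by
# the bare "Sansoni" pattern; count=1 enforces first-instance-only.
REPLACEMENTS = [
    (r'\|\s*publisher\s*=\s*G\.\s?C\.\s?Sansoni',
     '|publisher=[[Sansoni (publisher)|G.C. Sansoni]]'),
    (r'\|\s*publisher\s*=\s*Sansoni',
     '|publisher=[[Sansoni (publisher)|Sansoni]]'),
]

def link_first_publisher(wikitext):
    """Wikilink the first matching Sansoni |publisher= value only."""
    for pattern, replacement in REPLACEMENTS:
        new_text = re.sub(pattern, replacement, wikitext, count=1)
        if new_text != wikitext:
            return new_text  # one link per page is enough
    return wikitext
```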
Please remove all files in this category, because it is no longer necessary (all files in this category have been out of copyright since this year). 185.172.241.184 ( talk) 09:24, 15 January 2021 (UTC)
I've been doing this by hand today and I thought maybe a bot could? To avoid complications we could start doing this with a category that has ~100 articles and check for mistakes.
More info here: /info/en/?search=Wikipedia:Writing_about_women
Samiwamy ( talk) 18:35, 18 January 2021 (UTC)
Thanks everyone for the info. Samiwamy ( talk) 18:59, 18 January 2021 (UTC)
The Torah obligates a man to not deprive his wife of food. Hume Cronyn appeared alongside Jessica Tandy, his wife of over fifty years. More controversially, some people actually are notable mainly for being the wife/husband/son/mother/whatever of someone more famous. Certes ( talk) 19:24, 18 January 2021 (UTC)
"Prince Bernhard of Lippe-Biesterfeld, husband of Queen Juliana of the Netherlands, unveiled the Statue of Maria van Riebeeck" is correct to imply that the prince is mainly notable for being married to the better known queen. Would "married to" be an improvement there? Certes ( talk) 22:41, 18 January 2021 (UTC)
I have seen many users who have been blocked indefinitely for various reasons (socking, disruptive editing, CIR, and what not), but they still receive many newsletters and other notifications. Currently, there is User:Yapperbot/Pruner to remove inactive users from lists (WikiProject membership, FRS, etc), notifying the removed users appropriately. I am not sure what the extent of this task would be. Would it be feasible to spend resources on creating a bot task to add {{ nobots}} and "category:wikipedians who opt out of message delivery" to the talk pages of users who have been blocked indefinitely and have not had an {{ unblock}} request on their talk page for more than 30 days? That way, resources can be conserved by avoiding new bot messages being posted, and later being archived. In case the user returns after a while, or after the standard offer, they can simply remove the "nobots" and the category. Opinions are welcome. Regards, —usernamekiran (talk) 13:22, 15 September 2020 (UTC)
My specific need is to find all talk pages with the following six tags and remove all six of them. I would think that, if the "table" mechanism is generalized, then it could be used by others, so my preference would be a BOT named FindAllTheseTags_ThenRemoveAll (long name, but more descriptive than FindALLremoveALL).
My list of tags is:
To ensure clarity of the spec: Only Talk pages with ALL SIX are to be fixed.
The reason for this BOT is to counteract the still-existing after-effects of a BOT that, back in 2008, tagged talk pages with THE ABOVE SIX tags. My BOTREQ request is on the basis of my HelpDesk request, which directed me here. Pi314m ( talk) 12:33, 24 January 2021 (UTC)
This discussion has now progressed to Template talk:WikiProject Food and drink#2008 hangover: six tags, 15,000 cases. -- Redrose64 🌹 ( talk) 16:23, 25 January 2021 (UTC)
I frequently find sections with "unreferenced" tags that do have references, like this one. Is there a bot that can replace these tags with {{ refimprove}}? Jarble ( talk) 19:54, 22 January 2021 (UTC)
The bot would scan recent reverts and inspect the page history. It would then analyse the number of reverts against pre-set thresholds. If one of these thresholds is met, it would file an automatic report to WP:RPP requesting page protection.
Example thresholds could be:
I have no programming experience with Wikipedia so unfortunately I won't be able to program this. Eyebeller ( talk) 07:59, 18 November 2020 (UTC)
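To make the threshold idea concrete, here is a hedged sketch. The counts and windows below are invented placeholders (no actual values were agreed in this request), and the function name is mine.

```python
from datetime import datetime, timedelta

# Invented placeholder thresholds; real values would need discussion.
THRESHOLDS = [
    (5, timedelta(hours=1)),    # e.g. 5 reverts within the last hour, or
    (10, timedelta(hours=24)),  # 10 reverts within the last day
]

def needs_protection(revert_times, now):
    """True if any (count, window) threshold is met by the given reverts."""
    for count, window in THRESHOLDS:
        if sum(1 for t in revert_times if now - t <= window) >= count:
            return True
    return False
```

A real bot would feed this from the revision history (edits tagged as reverts) and then post to WP:RPP only after human-style sanity checks.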
Hello! I posted a comment over at the Village Pump and was directed here, so I'll copy here:
I think it'd be cool if a bot could be designed to add Talk page notifications when the subject's article is promoted to Quality status at another Wikipedia project. To pick an arbitrary example, a notification could have been added to en:Talk:G.U.Y. when hu:G.U.Y. was promoted to quality status.
Added benefits could be editors comparing different language versions, encouraging translation efforts, and more editors becoming familiar with Wikidata, depending on the notification's text and bot design. I could also see notifications being posted to WikiProject talk pages, etc.
Thoughts? Concerns? Other feedback? Sorry if this idea has been brought to the table before. --- Another Believer ( Talk) 15:38, 24 November 2020 (UTC)
I think this is technically difficult to do using a bot. The only reasonable approach I can think of is: if we knew the name of the GA template on a given wiki (given that, although we use Legobot, other wikis probably do it manually with differently named templates), we could patrol its recent changes, check for additions of the template, and then look up the Wikidata link to find the enwiki article and add a talk page message. Otherwise, this is probably better as a userscript with some kind of "Check other wikis for GA status" button in the toolbar. ProcrastinatingReader ( talk) 12:04, 5 December 2020 (UTC)
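The Wikidata lookup step mentioned above can be sketched as follows. The function and the trimmed sample payload are mine; they only mirror the general shape of a wbgetentities sitelinks response, and the entity id is illustrative, not the real one for G.U.Y.

```python
# A real bot would fetch the JSON from something like
#   https://www.wikidata.org/w/api.php?action=wbgetentities
#       &sites=huwiki&titles=G.U.Y.&props=sitelinks&format=json
# and then pull out the enwiki sitelink title.

def enwiki_title(wbgetentities_json):
    """Return the enwiki sitelink title from a wbgetentities response."""
    for entity in wbgetentities_json.get('entities', {}).values():
        sitelink = entity.get('sitelinks', {}).get('enwiki')
        if sitelink:
            return sitelink['title']
    return None

sample = {
    'entities': {
        'Q123456': {  # illustrative entity id
            'sitelinks': {
                'huwiki': {'site': 'huwiki', 'title': 'G.U.Y.'},
                'enwiki': {'site': 'enwiki', 'title': 'G.U.Y.'},
            }
        }
    }
}
```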
When uploading images to Wikimedia Commons, I often notice that there are no category redirects for the common names of most species, so there are too many redirects that need to be created manually. Is there a bot that could create these missing redirect pages, using data from Wikispecies or WikiData? For example: commons:Category:Red fox is {{category redirect|Vulpes vulpes}}. Jarble ( talk) 18:23, 10 December 2020 (UTC)
At Wikipedia talk:Moving a page#Updating archive bot settings when moving a page you can learn that PrimeHunter has recently created Category:Pages where archive parameter is not a subpage, and that by far the biggest reason for pages to end up there is that editors move pages without updating the talk page's archival bot instructions.
But why should humans have to do menial tasks like that at all?
I assume when the bots were created there were no real standards and practices regarding auto archiving, but now there are. Seems to me we can avoid needless administration (and a lot of pages that don't archive properly) if we change the code of the two main archival bots to assume the standard naming as the default. If the |archive=User talk:Example/Archive %(counter)d parameter (Lowercase Sigmabot III) and the |archiveprefix=User talk:Example/Archive parameter (ClueBot III) could be made optional, we could remove them from the standard instructions while still allowing manual override for the (few) cases where it's needed. This should mean that moving a page would no longer break auto archiving.
Of course, if there were a good reason this wasn't implemented back when, feel free to enlighten your audience :) CapnZapp ( talk) 09:59, 7 January 2021 (UTC)
|archive= (minus the subpage) is a redirect, and if it has any subpages matching the subpage pattern that are non-redirects. So in that way it could be automated. For ones that don't meet the criteria, it's likely post-move cleanup is needed and it could build a report. ProcrastinatingReader ( talk) 13:17, 7 January 2021 (UTC)
|archive= optional. A bot cannot check a parameter if it isn't there. It would have to look for moves in those cases. Moves aren't logged at the target name, so it would have to examine the page history or incoming redirects. If somebody copy-pastes the talk page instead of moving it, then there might be no trace. Not demanding a subpage name will also increase the number of poor archive parameters when somebody copies the archive parameters from a random page with very different activity. PrimeHunter ( talk) 22:39, 7 January 2021 (UTC)
Thank you all for your consideration so far, @ PrimeHunter, Primefac, Redrose64, and ProcrastinatingReader: Are you saying the occasional "overarchiving" (or whatever you feel is an appropriate title for the issue you have brought up) is deemed more disruptive than the (presumably) much larger load on human administration? That a big reason the bot writers mandated the archive name was so nothing was ever archived in the wrong place, even though it added a workload on humans that (from the layperson's perspective) is unnecessary? Perhaps a suggestion of this nature has been discussed previously? Cheers. PS. If this is the wrong venue for taking a holistic approach and discussion here should be limited to only unproblematic suggestions, please direct me to a more appropriate venue, and thank you for your time. CapnZapp ( talk) 10:29, 8 January 2021 (UTC)
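For what it's worth, the two checks discussed here, whether an existing |archive= still points under the page it is on, and what the standard default would be if the parameter were made optional, are both one-liners. This sketch (helper names are mine) is only meant to show that the logic itself is cheap:

```python
# Helper names are mine, not from any bot's actual code.

def archive_matches_page(page_title, archive_param):
    """True if |archive= names a subpage of page_title."""
    return archive_param.startswith(page_title + '/')

def default_archive(page_title):
    """The standard Lowercase Sigmabot III naming for a page's archives."""
    return page_title + '/Archive %(counter)d'
```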
The website airdisaster.com appears to be used in several articles about aviation accidents, but now links to a spam site/domain hoarder, which seems very undesirable for readers. Can someone get the direct links removed and, where possible, linked to an archived page? In particular where it is used as an external link; occurrences in references appear to have been fixed already. Pieceofmetalwork ( talk) 16:07, 9 January 2021 (UTC)
It's done. Example edits: [112] [113] [114] [115], etc.. -- Green C 03:16, 4 February 2021 (UTC)
When you upload an image and choose the option on the list that it is a book cover, it adds the book cover licence to the Licensing section, but you then have to manually add two things to the Summary. It should automatically set Use = Infobox, since there is no possible chance there would be anything else. The other required field is Article; the bot could easily see which article the image was just placed in, and if none is found, show a message reminding people to add one. Dream Focus 14:23, 14 February 2021 (UTC)
The following pairs of cleanup templates:
should not be used on the same article; but often are.
We need a bot, please, to remove the first template in each of the pairs named above.
The bot should not do this when the templates are section-specific (e.g. {{One source|section|date=October 2020}}).
The bot should remove {{ Multiple issues}}, where appropriate.
The bot needs to take into account common redirects (for example, {{ More citations needed}} is often used via {{ Refimprove}}; {{ More footnotes needed}} as {{ More footnotes}}, etc.).
This can be done as a one-off and then either run occasionally, or added to one of the regular clean-up tasks.
Other such pairs might be identified in future.
Prior discussion is here. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:51, 21 October 2020 (UTC)
needs more, or better, references, and not necessarily in-line ones, then to say that there is "a need for more in-line referencing based on the article's existing sources" is superfluous, as the article's existing sources have been tagged as insufficient. WT79 ( speak to me | editing patterns | what I been doing) 16:23, 2 November 2020 (UTC)
Two related proposals on the Community Wishlist survey have been rejected as out of scope, so I am putting this note here in case there is anyone interested in taking on a project to keep Wikipedia pages and categories up to date.
Basically, pages on Wikipedia are not refreshed often enough, which means that it can take weeks, months, or longer for category membership to update, or for things like age calculation in infoboxes to work correctly.
When a change is made to a template or module that involves category membership, pages that transclude that template or module require a null edit in order to update their category membership. Because of delays in the job queue, such category membership changes can take weeks, or even months. Even worse, changes to the underlying MediaWiki software that apply categories (e.g. those in Special:TrackingCategories) do not force pages into the job queue, which means that category membership for affected pages can take months, years, or forever.
These delays cause outdated information, missing information, and outright errors to be rendered for readers, and cause editors who are working on fixing problems identified by maintenance categories to be delayed in applying those fixes. When a maintenance category should be populated but is empty, it gives editors the false impression that all affected articles are working properly.
One proposed solution/workaround is to set up a background process that tracks all pages based on their last edit time stamp, including null edits. That tracking could be used to make a list of needed null edits for "stale" pages. There is some detail in the phab links below about how to generate such lists and (possibly) how to force pages into the job queue so that a null-edit bot might not be needed.
For details and links to phabricator tickets, see meta:Community Wishlist Survey 2021/Archive/Set maximum delay in updating category membership and meta:Community Wishlist Survey 2021/Archive/Correct wrong tenure lengths. (Actually, I'll just put the phab links here: T132467, T135964, T157670, T159512.) – Jonesey95 ( talk) 16:34, 7 December 2020 (UTC)
select count(*), SUBSTR(page_links_updated, 1,6) from page group by SUBSTR(page_links_updated, 1,6) order by SUBSTR(page_links_updated, 1,6) desc;), and probably some variations on it, including that same query limited to article and template space. If we could get a reasonable list of the stalest articles and templates, a bot could null-edit them systematically. – Jonesey95 ( talk) 16:35, 11 December 2020 (UTC)
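If such a list of stale titles existed, the null edits themselves might be done via the API's action=purge with forcelinkupdate, which re-parses a page and refreshes its category membership without an actual edit. A hedged sketch of the batching step (function name and batch size are mine; the HTTP call itself is left out):

```python
# Turn a list of stale titles into action=purge parameter dicts with
# forcelinkupdate=1. The dicts are the POST parameters a client would
# send to /w/api.php; bots with apihighlimits can use larger batches.

def purge_batches(titles, batch_size=50):
    """Yield API parameter dicts, one per batch of up to batch_size titles."""
    for i in range(0, len(titles), batch_size):
        yield {
            'action': 'purge',
            'forcelinkupdate': 1,
            'titles': '|'.join(titles[i:i + batch_size]),
            'format': 'json',
        }
```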
page_links_updated IS NULL and page_touched is old. They won't have been re-parsed since creation. Unfortunately, the page table does not seem to be indexed on those columns and I don't see a relevant alternative view. Certes ( talk) 17:16, 11 December 2020 (UTC)
Hello, the Illinois Historic Preservation Agency recently took down their website because it was based on Adobe Flash, breaking lots of links of the format http://gis.hpa.state.il.us/pdfs/XXXXXX.pdf (where X represents a numeral). I just checked a random one, and it was in IA, so the archive bots could run with these URLs, but how do I ask that they work on them? Nyttend ( talk) 13:12, 16 February 2021 (UTC)
We have many articles that have a disambiguated title that are not linked to from a hatnote and are not listed on a disambiguation page. Either editors forgot to add the page to the disambiguation page, or the hatnote was removed in an act of vandalism. Sourdough, Montana (created in 2009) was not accessible from the base title Sourdough until Sourdough (disambiguation) was created in 2020; Drought (disambiguation) was inaccessible from 2018 to 2020.
I'm wondering if this is something that would be worth keeping an eye on by periodically assembling a list. I have no idea if such a list would be too large for anyone to want to go through; maybe an invisible tag similar to {{ orphan}} could be added to these articles?
– Thjarkur (talk) 12:39, 19 January 2021 (UTC)
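The first step of assembling such a list could be as simple as stripping the disambiguator to recover the base title. A sketch (function name and rules are mine; the comma rule is deliberately naive and would wrongly split titles that legitimately contain commas):

```python
import re

# Parenthetical disambiguation is unambiguous; comma disambiguation
# ("Sourdough, Montana") is handled naively and is only a starting point.

def base_title(title):
    """Return the base (ambiguous) title, or None if not disambiguated."""
    m = re.fullmatch(r'(.+?) \([^)]+\)', title)
    if m:
        return m.group(1)
    if ', ' in title:
        return title.split(', ', 1)[0]
    return None
```

A bot would then check whether the base title, or its disambiguation page, actually links to the article, and report the ones that are unreachable.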
One of the things I like to do is make infoboxes compliant with MOS:SMALL and MOS:POINTS using AWB. For example, [116] and [117]. The SMALL fixes are easy: for html tags I just find <small> and </small> and leave the "replace with" window blank. For {{ small}} and {{ midsize}}, I use regex: find ({{small\|)(.*?)(}}) and replace with $2.
The MOS:POINTS are a bit more challenging. I basically hard-coded a bunch of find and replace rules using regex for common degrees. This way, it doesn't matter if it's typed as "M.B.A." or "M. B. A.", it'll still get changed to MBA.
The problem with AWB is it's not versatile enough for me, at least with my rudimentary skills. For example, in order to limit the find-and-replace to infoboxes, I set the rule as "inside templates", so I still have to make sure it doesn't make any changes to URLs in any of the CS1 templates. Another issue is related to my regex for PhD and PhB. For PhD: (P)(\.?)(\s?)(h)(\.?)(\s?)(d)(\.?). This means that in the infobox for Marcel Lettre, "Joseph D. Kernan" becomes "JosePhD Kernan". I'd like for this task to be done by a bot so that I can make other edits and not have to waste time making sure that these issues don't come up. Bait30 Talk 2 me pls? 01:49, 22 January 2021 (UTC)
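For what it's worth, the "JosePhD" bug comes from the pattern being allowed to start matching in the middle of a word ("...sePh D."). Anchoring it at a word boundary avoids that. An illustrative fix (the pattern is mine, not AWB's, and is a sketch rather than an exhaustive rule):

```python
import re

# \b stops the match from starting mid-word, so "Joseph D. Kernan" is
# left alone; IGNORECASE covers "Ph.D.", "Ph. D." and "PH.D." alike,
# and the lookahead stops the match from bleeding into a following word.
PHD = re.compile(r'\bP\.?\s?h\.?\s?D\.?(?![A-Za-z])', re.IGNORECASE)

def normalize_phd(text):
    """Rewrite degree variants like 'Ph. D.' as 'PhD'."""
    return PHD.sub('PhD', text)
```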
|name= or similar parameters. – Jonesey95 ( talk) 16:30, 22 January 2021 (UTC)
|education= anyways. Bait30 Talk 2 me pls? 21:28, 22 January 2021 (UTC)
|native_name= parameter; because |native_name= is rendered larger than the normal infobox text, the text inside the {{ small}} template ends up rendered at 93.5% of normal, which is perfectly fine and should not be enlarged. – Jonesey95 ( talk) 22:32, 22 January 2021 (UTC)
I request a bot that shall replace all Wikipedia links and the logo with their respective Uncyclopedia links. April Fools! Wikitrumpets ( talk) 04:10, 1 April 2021 (UTC)
I remember seeing this idea mentioned once, but for some reason it was never coded. Let's be honest here, this bot will be pretty accurate. April Fools! Pahunkat ( talk) 07:54, 1 April 2021 (UTC)
This section contains material that is kept because it is considered humorous. Such material is not meant to be taken seriously.
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Per WP:Most ideas are bad, most ideas are bad. But editors often forget this. Therefore, I propose a bot to remind them. This bot would use the latest advances in neural network language processing to automatically detect when someone is proposing an idea. It would then leave a message on their talk page, something along the lines of: "Hi! I'm User:BadIdeasBot. I noticed that you recently suggested an idea. Please remember that most ideas are bad. On the off chance that your idea is not bad, please disregard this message. Thank you." What do you all think? Surely this idea is one of the good ones, right? - {{u| Sdkb}} talk 00:24, 1 April 2021 (UTC)
April Fools!
use the latest advances in neural network language processing. lol. ProcrastinatingReader ( talk) 12:47, 1 April 2021 (UTC)