This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | ← | Archive 80 | Archive 81 | Archive 82 | Archive 83 | Archive 84 | Archive 85 |
WantedPages is pretty useless as it is since it considers links from and to talk pages. Does the requested action above help at all? JsfasdF252 ( talk) 03:06, 5 January 2021 (UTC)
Some older WP:FFD pages are titled WP:Files for deletion instead of WP:Files for discussion. Should a bot be created to move them to the newer title just like how WP:Votes for deletion pages were moved to WP:Articles for deletion for the sake of consistency? P,TO 19104 ( talk) ( contribs) 15:54, 23 January 2021 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | ← | Archive 80 | Archive 81 | Archive 82 | Archive 83 | Archive 84 | Archive 85 |
Anything with http://www.starwars.com should be changed to https. JediMasterMacaroni (Talk) 17:03, 25 February 2021 (UTC)
FANDOM to Fandom (website), please. JediMasterMacaroni (Talk) 00:57, 24 February 2021 (UTC)
There are some 800+ transclusions of Template:IPC profile. They go to an archive page, because the original link doesn't work, but with the first five I at random checked, the archive page doesn't work either: Scot Hollonbeck, Stephen Eaton, Jonas Jacobsson, Sirly Tiik, Konstantin Lisenkov.
It seems possible to replace the template with Template:IPC athlete: {{IPC profile|surname=Tretheway|givenname=Sean}} becomes {{IPC athlete|sean-tretheway}}. It is safer to take the parameter from the article title than from the IPC profile template though: at Jacob Ben-Arie, {{IPC profile|surname=Ben-Arie|givenname=<!--leave blank in this case, given name not listed-->}} should become {{IPC athlete|jacob-ben-arie}} [1].
If the replacement is too complicated, then simply removing the IPC profile one is also an option, as it makes no sense to keep templates around which produce no useful results. Fram ( talk) 11:34, 5 January 2021 (UTC)
I would love for a bot or script which could allow users to turn a page's citations which include a URL to Twitter into instances of {{ cite tweet}}. – MJL ‐Talk‐ ☖ 20:17, 3 February 2021 (UTC)
This bot is needed for fixing the grammatical errors. I noticed there were a no. Of grammatical errors in pages which was not attended by any users or administrators. — Preceding unsigned comment added by Kohcohf ( talk • contribs)
Trying to browse through and improve the drafts at Special:AllPages/Draft: is hard because there are so many redirects. Is it possible to create and maintain a category of drafts that are not redirects for easier navigation?
Edmonds community college is now known as edmonds college. I request a bot that changes all articles saying Edmonds Community College to fix the wikilink to change to Edmonds College. This is a college in Washington state, USA. 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) 09:07, 2 April 2021 (UTC)
OK 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) 09:15, 2 April 2021 (UTC)
Links die on the internet. I request a bot that checks if a citation reference to the Internet is mirrored on the Internet Archive to then rewrite the link to its link in the Archive as that won't suffer from Link Rot. This preserves references on Wikipedia which may vanish over time because of Link Rot. Big job! I know! Smart coder needed. 09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC) 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) --- This could be used to really fix the dead links backlog by hunting to see if the link was mirrored and then rewriting it to the archive to show the link. --- A second idea is to check if the link is mirrored and if so then to add language giving a backup link on the archive which says to go to the link on the Archive if/in case the first reference link has died from being removed from the Internet.
I want to request this bot called "EnergyBot". EditJuice ( talk) 16:16, 7 April 2021 (UTC)
The bot would make talk page archives. EditJuice ( talk) 16:19, 7 April 2021 (UTC)
No, I don't even have an archive. I request the bot for later, when I will have an archive already. EditJuice ( talk) 17:15, 7 April 2021 (UTC)
The site sentragoal.gr has been hijacked by a gambling site and we should be looking to deactivate active reference links to that source. If someone is able to manage that easily, that would be fantastic. — billinghurst sDrewth 23:55, 13 February 2021 (UTC)
|url-status=usurped
and other links should... have what happen to them? --
Izno (
talk) 00:44, 14 February 2021 (UTC)
Notice of this request has been posted at WT:LANG, and has received only positive comments (thanks or text).
I formatted an example by hand at Dâw language. There are a bit over 3000 URLs to link to. They provide demographic data and reliable sources for the languages, and are an alternative to Ethnologue, which is now behind a very expensive paywall. (And in some cases ELP is a check on Ethnologue, as the two sites often rely on different primary sources and often give very different numbers.)
Last time I did something like this it was handled by PotatoBot, but Anypodetos tells me that's no longer working.
Add links to the Endangered Languages Project (ELP) from our language articles through {{ Infobox language}}, parallel to the existing links to other online linguistic resources (ISO, Glottologue, AIATSIS, etc.)
The list of ELP language names and associated ISO codes and URLs is here. I would be happy if the entries in the table with single ISO codes were handled by bot. I can do the rest by hand, but see below.
There are three columns in the table. Two contain values for the bot to add to the infobox. The third is for navigation, an address for the bot to find the correct WP article to edit.
The bot should add params "ELP" and "ELPname" to the infobox, using the values in the columns 'ELP URL' and 'ELP name' in the data table.
The value in the column 'ISO code' is to verify that the bot is editing the correct WP article. The bot should follow the WP redirect for that ISO code and verify that the ISO code does indeed occur in the infobox on the target page.
For example, say one of the entries in the data table has the ISO code [abc]. The WP redirect for that code is ISO 639:abc. That should take the bot to a language article, and the bot should verify that the infobox on that article does indeed have a param ISO3 = abc or lc[n] = abc (where [n] is a digit).
If there isn't a match (and it's been years since we've run a maintenance bot to verify them all), then that ELP entry should be tagged as having a bad WP redirect for the ISO.
There is sometimes more than one ISO code per language infobox, because we don't have separate articles for every ISO code. (This is where the params lc[n] come in.) If the bot finds that there's already an ELP link in the box from a previous pass, then it should add the new codes as ELP[n] and ELPname[n], and keep a list so we can later code the template to support the article with the largest number [n] of links.
There is occasionally more than one infobox in a WP language article. It would probably be easiest if I did such cases by hand, since there are probably very few of them (if any), unless the bot can determine which infobox on the page contains the desired ISO code.
The bot should test that the external URL lands on an actual page. For instance, a language in the data table is listed as having URL 8117, but following 8117 gets the error message "Page not found :(". Such bad URLs should be tagged both for this project and for submission to the ELP.
If the programmer of the bot wishes to, it would be nice if they could do a run for the 40+ ELP entries that each have 2 ISO codes. (Or have three, if the coding is easy enough, but there are only 16 of those. Anything more than that I should probably do by hand.) If the rd's for those two ISO codes both link to the same Wikipedia article, then the ELP params should be added as above. If they link to different articles, they should be tagged and I'll do them by hand.
Please ping me if you respond. — kwami ( talk) 11:14, 16 January 2021 (UTC)
{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278}}
→ 2547{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278 |qual=P1810 |qo=y}}
→ DâwSorry, I didn't follow any of that. I don't see any of the data at Wikidata. E.g., I can't tell which are the 7 pages with multiple IDs, or how it was determined which page gets which ELP ID. — kwami ( talk) 08:13, 28 January 2021 (UTC)
ELP
and ELPname
parameters will remain unchanged, and every article that doesn't have those parameters set will try to fetch them from the corresponding Wikidata entry and use those. Please let me know if you need more explanation. --
RexxS (
talk) 13:30, 28 January 2021 (UTC)Thanks, @ RexxS:! That looks great!
Where would we go to update the ELP values?
Could you generate a list of ELP ID's with single ISO codes that are not being triggered, so I could fix them manually? I've noticed severl, but would rather not search all 3000 to check.
Could you add a name to the refs so we could call them with <ref name=ELP/>, <ref name=ELP2/>? And could you add a link to Category:Language articles with manual ELP links
for articles that have a value in ELP? (I've done it for ELP2 in the template.)
A slight hiccup, when ELP is entered manually without ELPname, nothing displays. Something should show, if only to alert editors that the infobox needs to be fixed.
BTW, see Yauyos–Chincha Quechua, where there is a second, partial match. (The only ELP article said to be a subset match to an ISO code.) I used ELP2 to add that to the automated link.
Gelao language has up to ELP4. — kwami ( talk) 22:08, 28 January 2021 (UTC)
@ RexxS: Actually, I do prefer Wikidata, but I didn't know how & where to go about modifying it.
I think there will still be some need to augment it manually, though. In other language WP's, they may decide to follow ISO divisions where we do not, or have other differences in scope that would not be appropriate in WD. So, unless there's a work-around (I'm not familiar with WD), we should probably have the universal elements in WD for every WP to access, and then manual overrides when some particular WP wishes to diverge from that, for whichever reason. (E.g. deciding that ISO or ELP is inaccurate, based on the sources used for an article.) Wouldn't putting everything in Wikidata cause conflicts between different-language WPs?
Also, how can we generate a list of the ELP ID's that are called in WP-en, so I can fix the ones that aren't? — kwami ( talk) 01:17, 30 January 2021 (UTC)
That reduces it to about 500 articles I need to check by hand or manually add to WikiData. — kwami ( talk) 09:29, 2 February 2021 (UTC)
@ RexxS, Vahurzpu, and The Earwig: In the bottom section at Wikipedia talk:WikiProject Languages/List of ELP language names (#Names in the 'Languages with single ISO codes' ...) are the 500+ ELP name that should be linked from WP articles but aren't. Sometimes that's because the WP article covers more than one ELP language, but other times I don't see why there's no link. Maybe just a mismatch in names?
Would it be possible to add those ELP names & links to the WP articles through Wikidata? (To the WP articles that those blue-linked ELP names redirect to.) I've done a few manually, and can revert that once they're in Wikidata. — kwami ( talk) 04:20, 3 February 2021 (UTC)
|ELP=4225|ELP2=1744
to the template, we add |from=Q12953229|from2=Q12633994
, and the template pulls the ELP IDs from there. This has an advantage of making it easy to maintain other dialect identifiers if we choose to move more identifiers to Wikidata. I am not sold on this approach, but wanted to propose it. —
The Earwig ⟨
talk⟩ 07:17, 14 February 2021 (UTC)
There appear to be many many thousands of articles that are missing {{ use mdy dates}} or {{ use dmy dates}}, but which are linked to information that is sufficient to determine which tag should be used. For instance, I think we can safely assume that an untagged page for a high school in a subcategory of Category:High schools in the United States (or with country (P17) = United States of America (Q30) on Wikidata) ought to be using MDY, or that an untagged British biography page not in any categories for expatriates or dual nationality ought to be using DMY. The 3500 pages that use {{ American English}} but have no tag seem like an even easier call.
I'd like to see a bot that goes through old pages and adds the appropriate tags where it can make a firm determination. It would then operate periodically to add DMY or MDY tags to new pages as they are created (but would not override any pages tagged manually). This would help reduce the incidence of the ugly 2021-02-15 dates, and save some amount of editor work. It would be very low-risk, as even if there's some unforeseen circumstance that causes the bot to occasionally mess up, there's very little damage done (e.g. Americans can still understand DMY fine, likewise for Brits with MDY, and most would probably prefer either to YYYY-MM-DD) and correction would be easy.
Does anyone want to take this on? {{u| Sdkb}} talk 23:27, 15 February 2021 (UTC)
based on strong national ties to the topic, which would be the case for the categories here. {{u| Sdkb}} talk 05:43, 16 February 2021 (UTC)
Hi there,
I am looking to for a bot to update a company page [1] with shows that are released as per the corresponding imdb page [2], is this possible? I apologise if this is not the place for this kind of request, I am new to using bots.
Many thanks
MonkeyProdCo ( talk) 15:48, 16 April 2021 (UTC)
References
I'm very busy in real life, and I need a bot to do the editing for me, he will use AutoWikiBrowser and help with vandalism, edit warring, and other problems and situtations. This will not be my only bot, I am also requesting to have 3 bots. He will have a user page, although I do not have one. This will be a good faith bot. If he malfunctions Press the emergency shutoff button. — Preceding unsigned comment added by BCuzwhynot ( talk • contribs)
Hello, I was hoping that a new bot could be created to do what Hasteur Bot used to do which was to notify editors that their drafts were coming up on their 6 month period of no activity when they could be deleted as stale drafts (CSD G13). These notices were sent out after a draft had been unedited for 5 months. We have been missing this since the summer which has resulted in what I think is a higher number of draft deletions and a high volume of requests for restoration at WP:REFUND. I think oftentimes, editors forget that they have started a draft (especially those editors who start a lot of drafts simultaneously), and these reminder notices are very useful for page creators as well as for editors and admins who regularly patrol stale draft reports.
Would it be possible for a bot creator to just reuse code from Hasteur Bot? But I'm just looking for a bot that will do exactly what it used to before it was disabled due to the bot creator's passing. See Special:Contributions/HasteurBot for examples of what I'm looking for. Thank you. Liz Read! Talk! 00:18, 7 February 2021 (UTC)
I'd say more than a few articles are marked stubs but assessed differently by Wikiprojects. Is it a good idea to maintain a (possibly cached) list of these for maintenance purposes. — Preceding unsigned comment added by 5a5ha seven ( talk • contribs) 23:17, 19 February 2021 (UTC)
~~~~
. Or, you can use the
[ reply ] button, which automatically signs posts.)
GoingBatty (
talk) 23:45, 19 February 2021 (UTC)Sections with the names of the months of the year are repeated twice or thrice, in "Events", "Births", and "Deaths". To make section headings unique, I propose the following changes be made:
replace regex
(\w)( ?)===
with
$1 {{none|births}}$2===
or
$1 {{none|deaths}}$2===
depending on the section. JsfasdF252 ( talk) 17:31, 5 February 2021 (UTC); updated 17:38, 5 February 2021 (UTC)
Per MOS:BOLD, "boldface is applied automatically [in] [t]able headers. Manually added boldface markup in such cases would be redundant and is to be avoided.". Special:Search/insource:/\|\+ *'''/ currently returns over 17,000 results. I suggest replacing
\|\+( *)'''([^'\|]+)'''\|
by
|+$1$2|
in all of the occurrences. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 ( 𝗍𝗮𝘭𝙠) 22:13, 21 February 2021 (UTC)
I am proposing a bot that will do the following task: It will add the template {{R to section}} to all the redirects on Wikipedia that link to a section. Yesterday I made almost 200 edits trying to add the template, and I was later inspired to add this request by this diff. 🐔 Chicdat Bawk to me! 11:13, 23 April 2021 (UTC)
Based on this search, there appears to be around 50,000+ articles (mostly on populated places in Iran) created by User:Carlossuarez46 that use the incorrect capitalization of "romanized".
The word is capitalized when it means "to make something Roman in character", and lowercase when it means "convert to Latin script", as it does in these cases. This distinction is reflected across all of our articles ( Category:Romanization), and is supported by dictionaries [2], encyclopedias [3] [4] [5], etc.
If a bot task were made to fix this, it would also probably be prudent to retarget the wikilinks like so: [[Romanization of Persian|romanized]], which is a better target. — Goszei ( talk) 23:56, 17 February 2021 (UTC)
Languages Usually Transliterated (or Romanized)). The title of the previous section, for example, is written as
Languages Using the Latin Alphabet.
In nonspecialized works it is customary to transliterate—that is, convert to the Latin alphabet, or romanize—words or phrases from languages that do not use the Latin alphabet.— Goszei ( talk) 00:31, 18 February 2021 (UTC)
(Romanized as [one or more words in Farsi script])
to reduce the risk of false positives.
Nyttend (
talk) 15:21, 18 February 2021 (UTC)
{{lang-fa|بابصفحه}}
part of the article text could be detected to ensure that the context is correct. I'm no good at regex, but the string {{lang-fa|anything goes here}}, also
Romanized as"
would be the target for making the two changes (decapitalization of romanization, and changing the link target). There may be some cases left over, but likely a reasonable amount that is fit for an AWB pass (i.e. with human review) instead of a bot run. —
Goszei (
talk) 17:58, 1 March 2021 (UTC)Wikipedia currently contains source references in several languages to the websites TracesOfWar.com and .nl (EN-NL bilingual), but also to the former websites ww2awards.com, go2war2.nl and oorlogsmusea.nl. However, these websites have been integrated into TracesOfWar in recent years, so that the source reference is now incorrect in approximately 1,200 pages and a multiple of the source references. Fortunately, there is currently the situation in which ww2awards and go2war2 still link to the correct page on TracesOfWar (redirect), but this is no longer the case for oorlogsmusea.nl. I have been able to correct all the sources for oorlogsmusea.nl manually.
For ww2awards and go2war2 the redirects will stop in the short term, which will result in thousands of dead links, while it can be properly directed towards the same source. Now I have started to make some changes myself by converting sources for these 2 sites as well, but after about a dozen of changes I somewhat lose hope to do this manually at least 1,150 times.
Is there any way this could possibly be done in bulk? A short example: person Llewellyn Chilson (at Tracesofwar persons id 35010) now has a source reference to http://en.ww2awards.com/person/35010, but this must be https://www.tracesofwar.com/persons/35010/. In short, old format to new format in terms of url, but same ID.
In my opinion, that should make it possible to convert everything with format ' http://en.ww2awards.com/person/[id]' (old English) or ' http://nl.ww2awards.com/person/[id]' (old Dutch) to ' https://www.tracesofwar.com/persons/[id]' (new English) or ' https://www.tracesofwar.nl/persons/[id]' (new Dutch) respectively. The same applies to go2war2.nl, but with a different format slightly. The same has already been done on the Dutch Wikipedia, via a similar bot request. Is this possible? Lennard87 ( talk) 10:57, 29 April 2021 (UTC)
Is it possible for a bot to change the following project talkpage templates?:
{{
WikiProject Floods}}
to {{
WikiProject Weather|floods-task-force=yes}}
{{
WikiProject Weather Data and Instrumentation}}
to {{
WikiProject Weather|met-data-task-force=yes}}
{{
WikiProject Droughts and Fire Events}}
to {{
WikiProject Weather|droughts-and-wildfires-task-force=yes}}
It would be rather time-consuming to change all of these by hand and the templates dont exactly agree with the changes I made to avoid changing the talkpage templates. Noah Talk 22:48, 12 April 2021 (UTC)
{{
Flood}}
and {{
Weather-data}}
? Thanks!
GoingBatty (
talk) 05:09, 16 April 2021 (UTC)Please could someone replace ELs of the form
with
{{
Cite rowlett|bhs}}
which produces
Thanks — Martin ( MSGJ · talk) 05:38, 19 March 2021 (UTC)
{{
cite web}}
, plain links, and links with piped text, and with/without additional plain bibliographic notes. For example, 165 of the https:// form are in a "url=..." context. I think there are too many variations to do automatically.
DMacks (
talk) 15:06, 19 March 2021 (UTC)
SEE Wikipedia bot requests #internet archive. A guy replied about the bot preppery. Tell that bot owner to come here and look at this conversation
Is there an existing bot that could add the missing Template:Documentation to other templates' pages? Are there any prior discussions? -- DaxServer ( talk) 12:20, 2 May 2021 (UTC)
| content =
of
Template:Documentation#Usage. Before considering a bot you would need consensus to stop that practice. It's common for simple documentation.
PrimeHunter (
talk) 15:10, 2 May 2021 (UTC)
I want a robot to be able to add {{ Bare URL inline|{{ subst:DATE}}}}to the end of all bare URLs.-- Alcremie ( talk) 09:54, 11 March 2021 (UTC)
The Cleveland Clinic Journal of Medicine used to be free and open, but now it is free with registration. Can a bot be modified to make edits like
this, where I removed things like the | doi-access = free
parameter and I added the | url-access= registration
parameter? I imagine a complicating factor might be having the bot generate urls. This is my first bot request from memory so my apologies if you feel I'm wasting your time by making this request. Thank you.
Biosthmors (
talk) 06:58, 16 April 2021 (UTC)
I've recently gone through all ACW unit pages and manually adjusted pagenames in order to standardize usage according to this discussion. After I performed these page moves, this discussion made clear I needed to repetitively replace a vast number of category entries in the form "X Y Civil War regiments" toward "X Y Civil War units" on appropriate unit articles, including their container categories. The top-most container categories for these changes are located at Category:Regiments of the Union Army and Category:Regiments of the Confederate States Army, which themselves need to be changed to "Units of the Y Army". There are some current state ACW regiment categories which omit "Confederate States" and "Union" merely because no units raised in the state were part of that army (for example: "Vermont Civil War regiments" and "Mississippi Civil War regiments"). In all cases, "regiments" should be changed to the more inclusive plural noun "units". This would take several thousand word replacements (in category names) at a page level. Is this something a bot might do? Is there a better way? BusterD ( talk) 18:45, 21 April 2021 (UTC)
On articles protected by pending changes, if there are no pending edits to an article, then autoconfirmed users should be able to have their changes automatically accepted. There is currently a rather frustrating bug that causes some edits by autoconfirmed users to be erroneously held back for review: please see Wikipedia:Village pump (technical)#Pending Changes again and various phab tickets [6] [7]. Apparently, the flagged revisions/pending changes codebase is completely abandoned (no active developers who understand the code), and currently no timely fix to this issue is anticipated. As an interim stopgap measure while we attempt to find developers to fix the underlying software, would it be possible to create a bot that automatically accepts pending changes made by autoconfirmed users where they should have been automatically accepted by the software? Thanks, Mz7 ( talk) 23:41, 12 March 2021 (UTC)
+reviewer
so I can experiment with the relevant API calls. The bot account this task runs under will obviously need this eventually as well, but that's for after the BRFA of course.
ƒirefly (
t ·
c ) 11:03, 14 March 2021 (UTC)
BRFA filed ƒirefly ( t · c ) 17:33, 15 March 2021 (UTC)
This is a request specifically to benefit external wikis who import Wikipedia templates for their own use. Currently, the redirect {{ Doc}} is transcluded onto over 3000 templates. This means that any wiki which imports one of these templates will also get a template-space redirect that they may not want, or at least a redlink that they then have to fix (if they happen to care about that). I don't think this is explicitly covered by any of the points at WP:NOTBROKEN, but it feels to me at least to be within the spirit of the second-to-last "good reasons" point:
In other namespaces, particularly the template and portal namespaces in which subpages are common, any link or transclusion to a former page title that has become a redirect following a page move or merge should be updated to the new title for naming consistency.
If the community here decides (or has decided) that reuse on external wikis isn't a major enough concern to justify this type of change, that's fine. I personally think it's worth changing this, though as a reuser at one of those external wikis, I'm obviously biased here. =) 「 ディノ奴 千?!」 ☎ Dinoguy1000 07:01, 12 March 2021 (UTC)
I want to make bots — Preceding unsigned comment added by 2601:246:5980:6240:8D01:A7E1:9D68:EE5C ( talk) 16:49, 12 May 2021 (UTC)
Ask you to review the article "ٱفلام حصرية" Thanks — Preceding unsigned comment added by 196.151.130.174 ( talk) 12:33, 18 May 2021 (UTC)
Please CfD-tag all categories in the following bulk nomination(s):
– LaundryPizza03 ( d c̄) 03:39, 16 May 2021 (UTC)
The article about Estradiol as a hormones contains an non working external link in the references. On the reference number 71., the last external link as a PDF is citing the values from this source " Establishment of detailed reference values for luteinizing hormone, follicle stimulating hormone, estradiol, and progesterone during different phases of the menstrual cycle on the Abbott ARCHITECT analyzer".
This external link redirects to an 404 server error and needs to be replaces with an operating link. The original research document is available on the laboratory's website.
How to change this link? I don't know how to use a bot. I'm thankful for any help. — Preceding unsigned comment added by Jerome.lab ( talk • contribs) 13:11, 30 April 2021 (UTC)
The goal is to remove "color =" and "popularity=" parameters from {{ Infobox music genre}}. Color parameter was suppressed in January 2019 [8], while popularity was removed in 2013 [9], but they are still present in ~900 and ~300 templates respectively [10]. It would be great if we could clean up these templates. Solidest ( talk) 17:06, 3 March 2021 (UTC)
This did happen before, and is a bit of an issue. There are apparently some really old reports created by COIBot out there that are not NOINDEXed and which appear in the Google search results. As far as I know we did solve that some time ago in the robots.txt, but I am not sure whether those really old, untouched reports actually 'get' that properly set through (and I am not sure whether a bot-run is necessary).
Can a bot go through all pages under Wikipedia:WikiProject Spam/LinkReports, Wikipedia:WikiProject Spam/Local (so e.g. Wikipedia:WikiProject Spam/LinkReports/example.org etc.) and add {{ NOINDEX}} to any pages that are currently in the category for no-index (I would not know how to find that in the first place). The edit will then enforce the parsing of the page and make sure that from our side the pages are NOINDEXed. If all are NOINDEXed all of them should probably be purged. We could consider to delete them, but some are representations of evidence that is used for the decisions to blacklist (but nothing is really lost, the bot can recreate them with data over the last 10 years, and since admins are handling the cases they can always see deleted revids.
A second step would be to contact Google to remove those that have not been re-indexed (and hence removed) by Google from the Google database, but that is probably something that needs to be done on a case-by-case basis so also not a bot task. It is an advice that we then can give to anyone who 'complains'. Thanks. -- Dirk Beetstra T C 13:52, 2 May 2021 (UTC)
Hi, I have seen many articles that seem to get WP:LAYOUT wrong. For example, placing 'See also' after 'References', 'External links' placed before 'References'. I think it is easy for bots to read the layout and correct them. Perhaps, an existing bot can be programmed to do that. Regards-- Chanaka L ( talk) 10:42, 1 April 2021 (UTC)
== *References *==[^=]+== *See also *==
and saves only if this code no longer exists after applying the genfixes. A similar bot could look for articles with == *External links *==[^=]+== *References *==
that saves only if the code no longer exists after applying the genfixes. What do you think?
GoingBatty (
talk) 04:18, 7 May 2021 (UTC)
The Dispute Resolution Noticeboard has had a bot-maintained table of the status of cases for several years, and this table can be transcluded onto user pages, and onto the main status page of DRN. This table should be updated a few times a day. This task was previously done by User:HasteurBot, but that bot has been retired from service because its operator is no longer in this world. This task was, until about ten days ago, done by User:MDanielsBot, but that bot has stopped doing that task. It is doing other tasks, but not that task. Its bot operator is on extended wikibreak and did not respond to email. I have spoken to one bot operator who is looking into this task. Robert McClenon ( talk) 16:21, 13 April 2021 (UTC)
Please tag all of the following articles included in Wikipedia:Articles for deletion/List of names of European cities in different languages:
– LaundryPizza03 ( d c̄) 21:27, 17 June 2021 (UTC)
Could a bot be created to add/update {{
Top 25 report}} to the talk pages of pages featured in the
Top 25 reports, it would be useful if the bot could also do the annual top 50 report and if the bot could go through the old top 25 reports as a few are missig thier talk page banners. Thanks,
SSSB (
talk) 09:20, 18 April 2021 (UTC)
Please, rename "Stadio Pierluigi Penzo" to "Stadio Pier Luigi Penzo" in this pages. Thanks in advance!!! -- 2001:B07:6442:8903:D4D:F67B:CF18:C681 ( talk) 13:36, 14 June 2021 (UTC)
Hi. I'm looking to revive a request previously made in 2018, which was discussed (to a considerable extent) here and here. Back then, TheSandDoctor originally offered to help, but due to other circumstances was unable to devote time to the task, and suggested that I ask here again. I've left it for quite some time, but better late than never I guess.
Briefly, names should be sorted by given name (i.e. as they appear) in Thailand-related categories. A Thai biography footer should as such contain the following:
{{DEFAULTSORT:Surname, Forename}} [[Category:International people]] [[Category:Thai people|Forename Surname]]
Currently, compliance is all over the place, with the Thai order being placed in the DEFAULTSORT value in some articles, and the Thai sort keys missing in others. A bot is needed to: (1) perform a one-time task of checking DEFAULTSORT values in Thailand-related biographies (a list with correct values to be manually supplied), and replacing the values if incorrect, and (2) do periodical maintenance by going through a specified list of categories (probably via a tracking template placed on category pages) and adding the Thai name order as sort keys to those categories' calls in each member article that is a biography. In most cases, the Thai name order would be the page title, but there are several exceptions, which I will later elaborate upon. This had been advertised and some details of the task ironed out back then, but since it's been three years there may be need to reaffirm consensus. I would like to see first, though, whether any bot operators are interested in such a project. -- Paul_012 ( talk) 00:13, 26 February 2021 (UTC)
{{DEFAULTSORT:Surname, Given name}}
[[Category:Category name|Given name Surname]]
[[Category:Category name|Category Sort key specified]]
{{DEFAULTSORT:Surname, Given name}}
[[Category:Category name|Given name Surname]]
Maybe I should provide a bit more background first. The short answer your last question would be, "No." To get the long answer, I went through the about 4,000 Thai people articles to identify the following patterns:
lengthy name examples
|
---|
|
I guess all this is to say it's probably far too complicated for the defaultsort value to be automatically processed; reading off a manually compiled list would be more practical. I'm still tweaking the list but see for example an earlier (outdated) version at Special:Permalink/829756891.
I think the process should be something more like:
[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)The above applies to the bot's initial run. There should also be periodical update runs, where 2.1 would be:
[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)Category recursion is tricky and can lead to unexpected problems, so {{ Thai people category}} should probably be placed directly on all applicable category pages. (That may also be a bot task.) I'm working off this preliminary list: Special:Permalink/1011801926, but some further tweaks my still be needed.
Since the Thai sort key will be the same as either the article title (for regular names) or the DEFAULTSORT value (for royalty, etc.), the DEFAULTSORT_UPDATE_LIST can note which case applies to each article, and this can be tracked in the article source. I think this would be preferable in the long run, as a central list will be hard to keep updated while a tracking template can be added to new articles as they are created. {{ Thai sort same as defaultsort}} wouldn't need to generate any visible output (except maybe a tracking category if useful).
Does this more or less make sense? -- Paul_012 ( talk) 23:15, 12 March 2021 (UTC)
{{Thai sort same as defaultsort}}{{DEFAULTSORT:Surname, Given name}}
(nothing between the template and DEFAULTSORT)? --
Kanashimi (
talk) 01:26, 13 March 2021 (UTC)[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)Given-name Surname
as the DEFAULTSORT (which the bot will need to correct) is quite old (mostly found in articles from over a decade ago I think). New articles today will likely have DEFAULTSORT values in the Surname, Given-name
format, so will only need PAGENAME sort keys added. The minority of articles which require specific formatting and tagging can be handled by patrollers following WikiProject Thailand's potential new articles feed as they are created. --
Paul_012 (
talk) 09:13, 13 March 2021 (UTC)
I've opened a discussion requesting community input at Wikipedia talk:Categorization of people#Bot for Thai name category sorting. I've now also listed the categories and articles at Wikipedia:WikiProject Thailand/Thai name categories and Wikipedia:WikiProject Thailand/Thai name sort keys. -- Paul_012 ( talk) 18:54, 16 March 2021 (UTC)
@ Paul 012: Sorry, it seems there are some pages modified during the interval we waiting for the task approved. Can you check and update Wikipedia:WikiProject Thailand/Thai name sort keys again? Thank you. For example,
And how do we deal with the pages moved in the future? When the pages moved, the sort key will not follow the changing. -- Kanashimi ( talk) 00:36, 31 May 2021 (UTC)
I know I'm late to the party but would it make any sense to sort the Thai names as [[Category:Thai foos|{{PAGENAME}}]]
(literally
the word PAGENAME in braces) so they will update automatically if the page name changes? That would include parenthetical qualifiers, but consistently sorting Foo bar (footballer born 1900) before Foo bar (footballer born 2000) might not be a bad thing. Non-Thai names in Thai categories could either follow suit to sort consistently (often by given name) or simply omit the sort code to sort by DEFAULTSORT (normally surname).
Certes (
talk) 09:45, 2 June 2021 (UTC)
Paul_012 I start running the routine version, it modifying 2 pages. Please check this round. -- Kanashimi ( talk) 23:31, 3 June 2021 (UTC)
There's current a boatload of raw IPv4/ IPv6 address used in URLs, instead of something legit useful to readers. Is there a way to parse/update a link like
to
by bot? Or something similar/close to this? I fully expect most such links to not be recoverable, but there could be a few that are. Headbomb { t · c · p · b} 23:01, 28 May 2021 (UTC)
google.com
just resolved to 172.217.7.14
for me, and while
https://172.217.7.14/search?q=earwig seems to work and could theoretically end up in an article somehow, that IP's reverse DNS is lga25s56-in-f14.1e100.net
which is clearly not what we want. Certainly tools could be used to build lists of possible replacements that could be manually reviewed, and a bot could perhaps operate on that, or we could use an existing method like
WP:URLREQ. —
The Earwig (
talk) 00:18, 29 May 2021 (UTC)
bot for linking stuff in wikipedia
FizzoXD (
talk) 03:31, 9 June 2021 (UTC)
By linking things i mean like linking to other wikipedia articles. Like if there is a word link "
internet meme" the bot would like it to the page by editing it.
FizzoXD (
talk) 05:31, 9 June 2021 (UTC)
I would like to request that a bot starts putting a project-box on all articles that appear in the Wikipedia:The 2500 Challenge (Nordic). Plenty of other projects has this kind of box at the articles talk pages like at Talk:Gunnar Seijbold. The project is growing bigger. BabbaQ ( talk) 11:59, 5 May 2021 (UTC)
Hopefully once a bot has gone through those articles, there may only be a few additional cases that I can manually fix. Unfortunately 700 is too much for me to do manually :(. Thanks I hope! -- Tom (LT) ( talk) 09:45, 2 April 2021 (UTC)
I really thought we had a bot or several working on this, and it seems it was brought up as recently as last year, but I just had to make yet another manual fix, so... We really need a bot that reliably fixes section links when section names are changed. Preferably one that stays online for more than a few weeks before it stops working. understatement {{u| Sdkb}} talk 06:58, 21 May 2021 (UTC)
|notify=Ladsgroup
to the bot's config line on
Wikipedia:Bot activity monitor/Configurations. –
SD0001 (
talk) 07:06, 21 May 2021 (UTC)
{{ demo inline}} is similar to {{ tbullet}}, but the former supports an infinite amount of named and unnamed parameters. {{ tbullet}} is more widely used, but can only support 6 unnamed parameters. I suggest replacing this:
{{tbullet|t|1|2|3|4|5|6}}
with this:
* {{demo inline|<nowiki>{{t|1|2|3|4|5|6}}</nowiki>}}
JsfasdF252 (
talk) 22:27, 24 April 2021 (UTC)
Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long over due, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 ( talk) 01:14, 4 May 2021 (UTC)
{{ demo inline}} is similar to {{ tbullet}}, but the former supports an infinite amount of named and unnamed parameters. {{ tbullet}} is more widely used, but can only support 6 unnamed parameters. I suggest replacing this:
{{tbullet|t|1|2|3|4|5|6}}
with this:
* {{demo inline|<nowiki>{{t|1|2|3|4|5|6}}</nowiki>}}
JsfasdF252 (
talk) 22:27, 24 April 2021 (UTC)
Following up from
this discussion about converting links to Wikimedia commons from http → https, it was decided a better option is to convert "external" links (i.e only those enclosed in [...]) to interwiki links since it provides better protection against
WP:LINKROT. For example [http://commons.wikimedia.org/wiki/File:Example.jpg Wikimedia commons]
→ [[:commons:File:Example.jpg|Wikimedia commons]]
There are currently about 4,100 main space pages that use http or https external link to commons. Most of them can be replaced with interwiki link. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ ( talk) 17:58, 28 May 2021 (UTC)
[[:commons:File:Example.jpg|Wikimedia commons]]
?
Primefac (
talk) 20:04, 28 May 2021 (UTC)
There are 210 talk pages that transclude both {{ GA}} and {{ article history}}. A bot could integrate the GA template data into the latter, to reduce template clutter. – SD0001 ( talk) 18:15, 1 June 2021 (UTC)
The March 2021 cleanup backlog for the Medicine WikiProject is currently dead links on articles that start with the letter A. About a quarter/third or so of the list was dead links in "cite journal" and "cite book" templates ( Template:Cite journal and Template:Cite book) that contain identifiers such as ISBN, DOI, or PMID. A URL is not necessary in these references because identifiers are used. Using the March backlog as a sample and considering the size of the dead link category for the Medicine WikiProject as a whole (currently around two thousand), there are potentially thousands of dead links site-wide that fall into this type of dead link. Removing a single one of these dead links is simple but finding all of them and making a large number of tedious edits is very time-consuming, so this seems like a task a bot could do. Note that |access-date and other URL-related parameters would also be removed. An example of what the bot edits would look like. Velayinosu ( talk) 04:09, 25 March 2021 (UTC)
|doi=
, |jstor=
, |pmc=
, or |pmid=
). If you have questions/concernes, let me know. Thanks!
Ajpolino (
talk) 15:12, 14 May 2021 (UTC)|url=
but there are many other places in a template a URL might be located. See the CS1|2 Lua Configuration source and search on "url". Since it has a {{
dead link}}
it unlikely to have a |archive-url=
+ |archive-date=
+ |url-status=
.. but I have seen it, the possibility exists, they should be removed as well. Let's see.. it could end up removing a dead URL that can be saved via Wayback and this Wayback contains the full PDF, while the DOI link doesn't contain the full PDF. One way to tell is if the template has a |doi-access=free
which flags the full PDF is freely available at the identifier-produced URL. Pinging Nemo who is more knowledgeable.. @
Nemo bis:. --
Green
C 16:53, 14 May 2021 (UTC)
|doi-access=
is used relatively rarely (unless a bot has been adding it?), but I aagree it's the most straightforward task. So let's see how wide a net that is, and if we then want to test a broader set of restrictions we can do so. Thanks again William Avery!
Ajpolino (
talk) 20:32, 20 May 2021 (UTC)I am looking for anyone to take a task of replacing signatures of
PumpkinSky (
talk ·
contribs). Their old signature had <font>...</font>
tags which are creating obsolete html tag Lint errors in all pages that have their signature.
Regex search shows that the signature is currently in 1,081 pages across namespaces. To remove the errors, the font tags need to be replaced with span tags.
[[User:PumpkinSky|<font color="darkorange">Pumpkin</font><font color="darkblue">Sky</font>]] [[User talk:PumpkinSky|<font color="darkorange">talk</font>]]
need to be replaced with [[User:PumpkinSky|<span style="color: darkorange;">Pumpkin</span><span style="color: darkblue;">Sky</span>]] [[User talk:PumpkinSky|<span style="color: darkorange;">talk</span>]]
ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ ( talk) 16:47, 11 May 2021 (UTC)
Extended content
|
---|
str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿րևանցիԵ]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font style="colou*r:["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* size="*([\dpxem\. ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; size:$2;">$3<\/span>'); str = str.replace(/<font face *= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\';">$2<\/span>'); str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* face= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; font-family:\'$2\';">$3<\/span>'); str = str.replace(/<font face= *"* *([a-z ]+)"* colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\'; color:$2;">$3<\/span>'); str = str.replace(/<font style *= *"color:([#a-z\d ]+);" *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font style *= *"([:#a-z\d ;\.\-]+)" *>([a-z\d_— \'’&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="$1">$2<\/span>'); str = str.replace(/(\[\[User:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); str = str.replace(/(\[\[User[ _]talk:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); str = str.replace(/(\[\[Special:Contributions\/[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); |
Thank you! -- ExperiencedArticleFixer ( talk) 16:41, 3 June 2021 (UTC)
Hey, was wondering if anyone here could help me extract the data from the templates listed at Template:Subatomic particle/symbol and Template:Subatomic particle/link and post in one of my user pages? The data in each template is a single line so there is nothing special here. -- Gonnym ( talk) 17:09, 13 July 2021 (UTC)
See PINOFF, my earlier request. The bot would look for pages with large numbers of templates like {{ Citation needed}} and add them to a list in its own userspace. It would never edit pages outside its own userspace; users who wanted to view the output would just transclude the page via template. Does this sound like something that needs a bot? 'Ridge( Converse, Create, & Fascinate) 15:17, 22 June 2021 (UTC)
I am requesting for a bot to go through userpages so that a list is created of pages where its users have made edits only to their pages.
The intent is to tag these user pages as {{ Db-notwebhost}}. Catchpoke ( talk) 00:46, 22 April 2021 (UTC)
Some of the newly created pages are orphaned pages, but they have not been marked with orphaned templates, is tagbot that can do this?-- q28 ( talk) 08:44, 9 June 2021 (UTC)
There are 298 list articles in Category:Lists of National Football League draftees by college football team. Of these, approximately 121 articles contain a table of records which are not in correct chronological order (they contain newer "2021" rows on top of older "19XX" rows).
Currently, all 121 articles in need of one-time cleanup contain sections with {{Chronological|sect=yes|date=June 2021}}
. So that is potentially a hook to key off of.
There are two cases where automated cleanup should update the table to cause render in chronological order (oldest YYYY rows first, newest YYYY rows last, AND preserve the existing top-bottom sequence of rows within a given year's draft by Round/Pick/Overall ).
Note, there might be a few per-article variations which do not use a section name of #Selections
Here are the other cases to consider, where no bot modification is desired:
If this can be automated, I am happy to manually inspect all 298 articles and fix/revert any missed edge cases to stable where necessary. Updating these manually would be very slow and prone to error. The scope is unlikely to ever be done, even with WP:NFL project participation. So any automation would be an enormous time-saver and win. Cheers, UW Dawgs ( talk) 23:41, 9 June 2021 (UTC)
Done @ UW Dawgs: This bot couldn't help with List of Florida State Seminoles in the NFL Draft. William Avery ( talk) 12:32, 22 July 2021 (UTC)
As an AfC reviewer I come across many draft articles with a disproportional ratio of references to the prose text. A healthy number of such draft articles are on subjects that ultimately turn out to be notable, i.e. authored by newcomer editors with good intentions who are simply oblivious of WP:OVERCITE.
By developing and launching a bot that would show a warning notice for editors trying to submit a draft article triggering WP:OVERCITE filters, we would:
Suggested filters that would trigger this notice could vary and be based on a community consensus. Examples of filters:
Example text for the notice: "It seems like your draft is using too many references. Please keep in mind that draft articles are not accepted based on the number of references provided. To the contrary, citation overuse can delay review or even be a reason for a decline. Please see WP:OVERCITE and consider editing your draft accordingly." nearlyevil 665 06:24, 12 June 2021 (UTC)
is an essaythat
contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, as it has not been thoroughly vetted by the community. Some essays represent widespread norms; others only represent minority viewpoints.
References
There are many list type articles, or articles containing a bibliography, or scientific details, where a large number of citation is essentia, and it is the accepted practice in medical articles to give what would probably be considered an excessive number in any other field. Any notice that might lead to people removing such references would be giving exactly the wrong advice. There are however several real problems, but I do not see how they are capable of easy solution by bot.
I do not think articles are often declined on this reason alone; and if they are, it is incorrect, and should be brought to attention of the deleting editor or if necessary at Deletion Review. Rather, the inclusion of excess referencing is often a sign of promotional editing, or editing by a fan. It is bad style, and while it is neve correct to delete for style alone, bad style often indicates problems, and will certainly cause an article to be looked at carefully--perhaps even hyper-carefully. It's not currently concentrated on women; rather, a few years ago some of those running editathons and projects on undercovered areas were somewhat careless of ensuring that the articles written were of more than borderline notability . This did encourage the tendency of some editors with traces of misogeny to be over-critical in this area. But those running such projects have learned, and so have most of the misogynists. DGG ( talk ) 06:48, 16 June 2021 (UTC)
I'm not really sure what bot is involved with this but we have had ongoing problems with Category:AfC G13 eligible soon submissions. When things are running normally, it holds between 4,000 -5,000+ draft articles that are between 5 and 6 months since their last edit. When they hit 6 months without a human edit, they are deleted per CSD G13. Also, reviewers from AFC (particularly DGG) go through this category and "rescue" promising drafts and postpone their deletion and sometimes even move good drafts into main space.
What has happened this past year is that this category starts going down to 3,000 drafts, 2,000 drafts, 1,000 drafts and now it is only holding 478 expiring drafts. When this has happened in the past, I have asked ProcrastinatingReader for help and he has been able to do some technical task that causes the category to, over a few days, fill back up again. Right now though, he can't get to this task and advised me to come here and ask for help.
I have little information to offer beyond a description of the problem. I have no idea what bot or template categorizes these drafts or what ProcrastinatingReader did to fix this problem. I know that having categories filled has been an ongoing problem because I have brought the issue to the Village Pump and other individuals several times over the past few years. So, I'm not sure what the fix would be. If you could find a permanent solution, that would be awesome. Thank you. Liz Read! Talk! 02:20, 16 June 2021 (UTC)
Sounds like a task for User:Joe's Null Bot. According to toolforge:sge-jobs/tool/nullbot it's still operational, despite the warning on its page - FASTILY 22:16, 7 July 2021 (UTC)
I created a custom task to process this in a more simple manner. No onwiki list to manage (like User:ProcBot/PurgeList2) but this also means it will actually complete its runs. It'll run once a week. Current category count is just over 3k. It purges those three cats listed on User:ProcBot/PurgeList2, which I presume together contain all AfC active drafts. It will update the G13-expiring-soon for pages in those three categories only. I still advise moving to a more sophisticated DB-generated system, such as SD's list. ProcrastinatingReader ( talk) 11:26, 9 July 2021 (UTC)
Pls add my name bots user. Hind ji ( talk) 06:19, 29 July 2021 (UTC)
I regularly come across The New York Times in articles written only as "the New York Times". Would it be possible to have a bot find all instances of "New York Times", and ensure that the "the" before the instance is not only capitalized, but also italicized as part of the proper name of the publication? This could then also be repeated for other publications that the "The" is part of the proper name (The Boston Globe, The Herald Journal, The News Courier, The Plain Dealer, etc.) after/later? - Adolphus79 ( talk) 08:01, 22 July 2021 (UTC)
According to the New York Times article "Ibsen or Shakespeare?" (March 18, 1928), Harrison Grey Fiske was 12 years old when he first set eyes on the future Mrs. Fiske—she was but eight, performing in a Shakespearean role.
("a The NYT article") or ... ("the The NYT article"). – Jonesey95 ( talk) 21:20, 22 July 2021 (UTC)
leading article may be dropped when the title is used as a modifier: According to a New York Times article by .... Certes ( talk) 00:43, 23 July 2021 (UTC)
I'm not going to disagree with concensus, nor ask the bot(master)(s) for an impossible task. It was just an idea I had at 4AM... lol - Adolphus79 ( talk) 02:34, 23 July 2021 (UTC)
Hey, Bot folks,
I accidentally happened upon a user talk page where the user page had been deleted almost exactly four years ago (see User talk:Bmoy94/sandbox/Innova Market Insights) but the user talk page was not deleted. This was a surprise to me because we have database reports for orphaned talk pages (see Wikipedia:Database reports/Orphaned talk pages and Wikipedia:Database reports/Orphaned file talk pages). Are User Talk pages exempted from these reports?
Obviously, this is not an urgent problem but it would be useful if there was a bot report for orphaned User Talk pages as well. Right now, many deletions are done with Twinkle which will delete redirects but not the redirect talk pages. If it is a regular article redirect talk page, it can show up on the orphaned talk page report or the broken redirects report but not all talk pages of redirects are also redirects and some redirects are from User pages. I don't think there are hundreds of these pages out there but it would be useful if, like the orphaned file talk page report, this could become a weekly report that is done. Thank you for considering this request. Liz Read! Talk! 22:46, 24 June 2021 (UTC)
COUNT(*)
with the page titles.
GoingBatty (
talk) 05:02, 25 June 2021 (UTC)
I thought it would be "easy" to download the current list of titles (enwiki-20210620-all-titles.gz from here) and do some clever stuff to find orphaned talk pages. Problem: my first run found 16,228,343 orphaned pages! Some superficial checking showed that most of those were due to things like Talk:Example/Archive_1 or .../GA1 or .../FA1 or .../Test or .../OtherStuff. When I finally found a few dozen orphaned talk pages, they were tagged as "keep" (in Category:Wikipedia orphaned talk pages that should not be speedily deleted), examples Talk:Qazwsxedcrfvtgbyhnujmikolp, WT:AOTM, Category talk:Films about hebephilia, Draft talk:Anirban Sengupta. Then there are weird redirects like WT:MOS:VG. I finally found a junk page: TimedText talk:Constitution.ogg (and probably a few more). I'm posting this to let anyone wanting to take the job on that quite a lot of pruning of results would be required. Johnuniq ( talk) 11:16, 25 June 2021 (UTC)
We recently had a discussion at Template talk:Infobox person#Deprecating the net worth parameter? and it was decided to remove the parameter. Would it be possible to get a list of all the infoboxes that use the parameter. Also could it be go back to July 11 rather than the current date since we have someone who is removing all the parameters as we speak? Thanks! Patapsco913 ( talk) 16:00, 18 July 2021 (UTC)
Please remove these pages (that contain {{ pec}} and the related templates and use the template {{ Category class}}) per the discussion at Template talk:Category class. Qwerfjkl talk 21:41, 13 August 2021 (UTC)
Hello. I'm a marine biologist specialized in Echinodermata. I would like to be informed of any new picture of these animals so I can review the identification and, when useful, add them to the relevant Wikipedia articles. But as there are over 7000 species of them, of course I can't check all the categories every day. I used to benefit from Ogrebot's newsfeed for a long and useful time but it is no longer working. Do you guys know any other way I could get such uploading newsfeed ? Thanks and best regards, FredD ( talk) 14:23, 8 June 2021 (UTC)
Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long over due, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 ( talk) 01:14, 4 May 2021 (UTC)
Responding in order:
Hi there, I've been occasionally trying to chip away at the articles in the Category:Infoboxes with unknown parameters. Some of these categories are... beefy to say the least, some of the standouts are Film with 11.2k, German location with 9.8k, Officeholder with 3.5k, Organisation with 3k, Scientist with 7.4k, Settlement with 10.5k and Sportsperson with 3.7k. Currently, the only way to really tell which parameter is causing the issue is to attempt to edit the article and either check the infobox parameters or save the preview so it appears at the top of the page. You are at least given an idea of what to look for by the sortkey it appears under in the category, but I don't think this works when there are multiple errors properly, which results in having to consult the preview regardless. I'm not certain but I feel like a lot of these ones especially are either deprecated parameters that PrimeBOT might be able to handle or simple misspellings or other issues that could be handled with AWB or other similar tools, such as missing underscores and dashes and alt test and image sizes being separated with a pipe.
Since this requires information to be grabbed, it seemed like a bit more than an SQL query would be necessary, I was thinking of some sort of bot that could generate a report maybe that could be linked to from the category page. I'm not thinking anything too complicated (in my uneducated opinion, I think), just something that lists the page in the left column and the broken parameter in the right, you could sort both columns (so by article title or broken parameter), and this would make it much easier to visibly see where there is a great amount of overlap in broken parameters to more speedily clear out these categories.
I hope that makes sense, but it would hopefully assist the relevant WikiProjects in being able to clean up their respective articles as well, and potentially allow for these parameters to be either added in as aliases or if there is significant usage within a template, maybe even have an underused parameter modified to call an already existing name used in the majority. Thank you if anyone is willing or able to help out! -- Lightlowemon ( talk) 12:22, 28 June 2021 (UTC)
All German state broadcasters have to follow a 2009 law that they need to delete all online content after a year, so as not to "disadvantage" commercial news corporations. ("12.
Rundfunkänderungsstaatsvertrag" [
de], 1 June 2009)
This has big consequences for Wikipedia when they cite news from German state broadcasters:
It means legally mandated automatic link rot for such sources.
I suggest a bot that recognizes when
such a broadcaster is cited and automatically requests a save point from the Internet Archive, then links the save point in the ref.
Also see the Depublication [ de]: The whole German article is about this novel concept brought up the 2009 law.-- ΟΥΤΙΣ ( talk) 00:26, 27 June 2021 (UTC)
Pinging User:Marchjuly. We had a short discussion at Wikipedia talk:Files for discussion#Notifying uploaders about a bot that could leave FfD notices on the talk pages of all articles that use the nominated image. I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?), and having one for local files would be beneficial for all the same reasons (more participation at FfD, having a record in the article talk history, and general notification and discussion transparency purposes). — Goszei ( talk) 23:31, 2 July 2021 (UTC)
I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?)That's Community Tech Bot [14] – SD0001 ( talk) 15:47, 5 July 2021 (UTC)
{{
FFDC}}
is non-trivial from a programming perspective. Media files can be displayed/embedded in a variety of ways (e.g. infoboxes, galleries, thumbnails, other templates I'm not thinking of) and adding {{
FFDC}}
as a caption in the correct location for each of these scenarios could be extremely tricky for a bot. To be clear, I don't think this would be a bad thing to have, but I do believe the reward to effort ratio is very low. OTOH, removing {{
FFDC}}
when FfD's are closed is much more straightforward. If there's consensus for it, I can build it. -
FASTILY 21:54, 7 July 2021 (UTC)
|caption=
or do some other tweak to the file's syntax to add the ffdc template. I also have noticed that ffdc template are sometimes removed by editors who don't think the file should be deleted; they seem to misunderstand the purpose of the template and mistake it for a speedy deletion template of sort. I never really considered any possible article talk page spamming effect about might have, but that does seem like valid point now that it's been made. I can see how not only new editors, but even long-term but not very experienced editors (i.e. long-term SPAs) might be find the templates "shocking" in some way. Maybe adding them to a WikiProject talk page would be better as well since the editors of a WikiProject are less likely to be shocked by such template. Even better might be to figure out a way to incorporate
WP:DELSORT into FFD discussion since many WikiProjects already have "alert pages" where their members can find out about things like PRODs, SPEEDYS and XFDs.There's always going to be people unhappy when a file is deleted; so, there's no way around that. Many times, though, these people claim they weren't properly notified at all or not in enough time to do something about it, and there might be some way to help mitigate that. I'm also a little concerned about comments such as
this where some editors nominating files for FFD might be relying too heavily on bots for things like notifications. For example, the wording of {{
FFD}} states as follows: Please consider notifying the uploader by placing {{subst:ffd notice|1=Ffd}}
on their talk page(s)
. That, however, seems a bit inconsistent with the instructions given in the "Give due notice" section at the top of the main FFD page and this might be causing some confusion. I don't use scripts, etc. whenever I start an FFD and do everything manually; this is a bit more time intensive perhaps, but I think it also might lead to less mistakes because you have to check off a mental list of items before the nomination is complete. Those who do rely on scripts or bots to do this type of thing though, might set the bot up to do only the bare minimum that is required; they're not wrong per se, but the automation might cause some things to be left out that probably shouldn't be left out. So, before any new bot is created and starts doing stuff, it might be better to figure out exactly what a human editor should be required to do first. --
Marchjuly (
talk) 22:44, 7 July 2021 (UTC)The bot action would be to check the 'what links here' page of articles that have been deleted by WP:AfD (and are still deleted) and report/list any with links to main space articles. And provide/update a list to the project Wikipedia:WikiProject Red Link Recovery
There should not be any redlinks to articles that have been deleted by the AfD process, C.1. "If the article can be fixed through normal editing, then it is not a candidate for AfD."
People would go through the list and make decisions about how to fix, maybe it should a redirect, maybe the redlinks need to be unlinked, maybe something else...
This is discussed on the project page A bot seems like the best solution.
Because there are several avenues that these might get addressed, it seems like the best solution would something that updates regularly, so corrected subjects fall of the list, and new subjects get added.
Jeepday ( talk) 15:37, 22 June 2021 (UTC)
I frequently come across pages that have dozens of sections, many of them from years ago, and many of them being bot-posted "External links modified" sections. I think very long talk pages, especially when most the content is not very relevant, makes them less usable. Most new users won't know how to setup bot archiving. Would it be reasonable for a bot to automatically setup archiving on pages that meet a certain criteria (length/age related), using a standard configuration with the default age depending on how active the talk page tends to be? ProcrastinatingReader ( talk) 17:16, 11 July 2021 (UTC)
The category page Category:Characters adapted into the Marvel Cinematic Universe has been correctly proposed for speedy deletion as it was previously deleted as a result of a prior discussion.
Can I safely delete that page assuming that a bot will notice and go through all 400+ pages that include this category, and remove it? Or is there an existing bot for which I need to queue up a request? ~ Anachronist ( talk) 20:58, 10 September 2021 (UTC)
I want to create a bot for welcoming new users. King Molala ( talk) 08:41, 8 September 2021 (UTC)
This is a pretty minor task, but throwing it out here, as it'd be very doable via bot. There are many ISBNs on Wikipedia that lack proper hyphenation. https://www.isbn.org/ISBN_converter always knows how to fix them, but it'd be nice to have a bot on-wiki do it instead. Whether or not we'd also want to switch to using a non-breaking hyphen at the same time is something to consider, given that it doesn't seem we'll be able to use a no-wrap to prevent unwanted line breaks. Alternatively, if this is too cosmetic, we could find a way to add it to the WP:GENFIX set. {{u| Sdkb}} talk 21:41, 1 July 2021 (UTC)
In 2019, the ~6,000 articles on bilateral relations were given short descriptions in the format of "Diplomatic relations between the French Republic and the Islamic Republic of Pakistan", with full country names, by Task 4 of User:DannyS712's User:DannyS712 bot ( BRFA here). These are way over the 40-character instruction in WP:SDFORMAT, for little utility in information conveyed. I propose that another task be run where the SD's are all removed, so that an automatic short description like "Bilateral relations" can be added to Template:Infobox bilateral relations. — Goszei ( talk) 23:45, 5 July 2021 (UTC)
Diplomatic relations between the French Republic and the Republic of Iraq(73 characters) at France–Iraq relations is definitely way too wordy. I'm not super keen on "bilateral relations", as many people don't know what "bilateral" means, and there's no indication that what we're talking about is diplomatic relations, not some other type, which is the only way I could really see these titles needing any clarification. Something like
Bilateral diplomatic relations(30 characters) might be good. {{u| Sdkb}} talk 23:23, 12 July 2021 (UTC)
Done Just noting that PrimeBOT took care of this task circa July 24. Thanks to Primefac. — Goszei ( talk) 01:30, 21 August 2021 (UTC)
I've removed this thread, pending review from the Oversight Team. Somewhere like BOTREQ isn't the best place for "here's a huge privacy issue". At this exact point in time I will be suppressing it, but I am also putting this up for review by the OS team, and we will determine whether this thread is acceptable to keep the discussion going, if it should just live in the history, or if it should stay suppressed. Primefac ( talk) 18:06, 25 August 2021 (UTC)
I want to create a bot for welcoming new users. — Preceding unsigned comment added by Tajwar.thesuperman ( talk • contribs) 18:50, 29 August 2021 (UTC)
I pretty frequently come across instances where there are too many line breaks at the end of a section in an article, creating an extra space. This seems like something a bot could pick up fairly easily, although I'm sure there are some exceptions/edge cases that'd throw it off if we're not careful. Would anyone be interested in putting together a bot to address these? I'm sure there are thousands and thousands of them. {{u| Sdkb}} talk 21:35, 13 July 2021 (UTC)
wikicode = wikicode.replace(/\n{3,}/gm, "\n\n");
. If interested, give me a ping and I can make a custom user script that does this. –
Novem Linguae (
talk) 10:30, 14 July 2021 (UTC)The new WP:RPP permanent archive has a missing page for requests filed in October 2013: Wikipedia:Requests for page protection/Archive/2013/10. – LaundryPizza03 ( d c̄) 18:21, 17 July 2021 (UTC)
When a draft is deleted images uploaded to Commons are not always checked and might be left to languish. Even if the images are acceptable they may be uncategorized.
To help with this I would request to have a bot automatically create a list of images in rejected (or deleted, if possible) drafts, with the following conditions:
Optional features:
I am bringing this here after comments in Wikipedia:Village pump (proposals)#Automatic lists of images in rejected or deleted drafts MKFI ( talk) 19:54, 16 June 2021 (UTC)
This message is sent on behalf of the WikiProject Guild of Copy Editors coordinators: Dhtwiki, Miniapolis, Tenryuu, and myself. We screwed up. The Guild of Copy Editors (GOCE) sent out a mass message to our mailing list before it was ready. It has too many errors to fix, so we would like it completely removed. We will edit the message and resend it at a later date.
If there is a friendly bot operator who can revert as many of these mass message additions as possible, we would be grateful. – Jonesey95 ( talk) 20:38, 17 September 2021 (UTC)
When (almost) all of the individual articles listed at
List of settlements in Central Province (Sri Lanka) were created in 2011 (around 1,500 articles), they used the same layout, linking to the Sri Lankan Department of Census and Statistics as an exteral link. The URL for the site has changed, so
https://www.statistics.gov.lk/home.asp
no longer works and should be replaced with
http://www.statistics.gov.lk/
on all articles.
I'm requesting a bot that can replace * [http://www.statistics.gov.lk/home.asp Department of Census and Statistics -Sri Lanka]
with *[http://www.statistics.gov.lk Department of Census and Statistics]
on each article. —
Melofors
T
C 21:42, 11 September 2021 (UTC)
In
Category:NA-Class France articles ( 24 ) there are many pages with hardcoded |class=na
in the {{
WikiProject France}} template. I would like the "na"/"NA" removed. This will allow redirects, templates, and such to file into the appropriate categories, and any oddballs can be sorted out individually after. --
awkwafaba (
📥) 01:35, 18 September 2021 (UTC)
Hello! I discovered that the Shoki Wiki is dead. We use quite a few links from that site for the translated text of Samguk sagi, as a source for Korean kings, like in Jinsa of Baekje. The links have a format of http://nihonshoki.wikidot.com/ss-23 where ss is samguk sagi and 23 is the number of the scroll. Some 60 pages link to various scrolls, and it would be just too much manual work to correct the links with the archive links. Anyone could do this with an archivebot? Thanks a lot. Xia talk to me 18:01, 22 August 2021 (UTC)
Template talk:Infobox film#Request for comments has been closed as consensus to reorder the fields like this:
Before | After |
---|---|
... |director= |producer= |writer= |screenplay= |story= |based_on= |starring= |narrator= |music= |cinematography= |editing= ... |
... |director= |writer= |screenplay= |story= |based_on= |producer= |starring= |narrator= |cinematography= |editing= |music= ... |
(See testcases for rendered examples.)
This means not just does the template need to be changed but in any article where a person notable enough to be linked appears in both |producer=
and any of |writer/screenplay/story/based_on=
, or in both |music=
and |cinematography=
or |editing=
, the linked and unliked occurrences will need to be swapped. So we need a bot to make changes like these:
Article | Before | After |
---|---|---|
Good Night, and Good Luck | | producer = [[Grant Heslov]] | writer = {{unbulleted list|George Clooney|Grant Heslov}} |
| writer = {{unbulleted list|George Clooney|[[Grant Heslov]]}} | producer = Grant Heslov |
The Usual Suspects | | music = [[John Ottman]] | cinematography = [[Newton Thomas Sigel]] | editing = John Ottman |
| cinematography = [[Newton Thomas Sigel]] | editing = [[John Ottman]] | music = John Ottman |
CirrusSearch's regex engine doesn't seem to support back references to capturing groups, so I don't know how many articles need fixing. I don't think we need to simply reorder the parameters in articles that don't require moving links, as that would be purely cosmetic, though I could see an argument either way. The link might not always be a plain wikilink but could be {{ ill}} etc. Also some articles must have refs or footnotes in relevant arguments, which could be a nuisance in figuring out what needs to be done.
To minimize disruption, I plan not to implement the changes to the template until a bot is ready to take on this task. Nardog ( talk) 22:09, 7 July 2021 (UTC)
order of input does not affect order of outputOf course. I was just using the parameters rather than the labels for the sake of explanation. (But as I said above, an argument could be made that reordering the input even if it doesn't change the output would prevent editors from accidentally linking the wrong occurrences of names and thus producing more instances of what we're trying to fix. I guess the sheer number of the transclusions makes a compelling case against making such cosmetic changes, though.)
|producer=
to |writer=
etc. (none involved |music=
). So, though the samples may not be completely representative, we can expect ~6%, or ~8,776, of the articles with the infobox would require moving a link. I can set up a tracking category, or is it better for the bot to go through them one by one?
Nardog (
talk) 11:19, 11 July 2021 (UTC)
|producer(s)=
or |music=
and see if the same phrases appear in |writer=
etc. Admittedly I imagine it'll turn up a lot of false positives, but I haven't got a better idea.
Nardog (
talk) 13:29, 11 July 2021 (UTC)
@ Primefac: So I just went ahead and semi-manually cleaned up the category (I hope it didn't upset your workflow ;)). Excluding existing DUPLINKs from the detection brought down the number from ~2,500 to ~1,500, which made it more manageable.
PrimeBOT's operation last month left hundreds of errors like [[[[Mexico]] City]]
, West [[German language|German]]y
, <ref name="[[Variety (magazine)|Variety]]">
, |name=[[Bruce Lee]]: A Warrior's Journey
, which I fixed as far as I could find. FWIW I just left comments, refs, and non-personal parameters (|name=
, |caption=
, |distributor=
, |released=
, etc.) alone, which was enough to avoid most of these.
Nardog (
talk) 10:42, 19 August 2021 (UTC)
As a former volunteer / current user, I periodically come across articles with years old {{refimprove}}
templates. (e.g.
example). I just remove them if it's obvious reference have been added since the tagging. Seems like a bot could do that, based on whatever criteria the community agrees on.
NE Ent 12:39, 5 July 2021 (UTC)
{{
refimprove}}
(which now redirects to {{
more citations needed}}
)?
GoingBatty (
talk) 14:53, 5 July 2021 (UTC)
If there's no interest in auto removing the tag, perhaps a bot could post a notice on the original posters talk page asking them to review the page? NE Ent 11:40, 14 July 2021 (UTC)
Hello! I have noticed that most of the articles in /info/en/?search=Category:National_Register_of_Historic_Places_in_Virginia link to an old page on the Virginia Department of Historic Resources website ( http://www.dhr.virginia.gov/registers/register_counties_cities.htm) that isn't actually informative (and if it was useful at one point, the archivebot doesn't have it).
Ideally, these links would point directly to the listing's page on the Virginia Landmarks Register website. These pages conveniently use the VLR number in the URL. (For example, for the listing https://www.dhr.virginia.gov/historic-registers/014-0041/, "014-0041" is the VLR number.) The vast majority of these pages also have a NRHP Infobox, which usually includes the VLR number as "designated_other1_number =".
Is there a way for a bot/script to crawl instances of the URL: " http://www.dhr.virginia.gov/registers/register_counties_cities.htm" and change it to " https://www.dhr.virginia.gov/historic-registers/{value of "designated_other1_number" in the page's Infobox}/"?
I've been doing this manually and I just realized that A) there are thousands of these and it's going to take me forever, and B) a robot could probably do this.
Thanks! Niftysquirrel ( talk) 14:27, 5 August 2021 (UTC)
I propose a bot to automatically or semi-automatically parse the various "Sister project" templates across all of the different WMF projects, and synchronize their parameters with Wikidata.
Examples of these templates:
In its most basic form, I think this bot could just parse Wikitext for templates that imply a direct equivalency between two sister project pages, and then add those to their Wikidata entries, with human approval, if they're not already documented. I think this behaviour should be fairly uncontroversial.
In more advanced forms, I think complete two-way synchronization and some degree of semantic behaviour could potentially be useful too. For example, a template on a Wikisource page could be used to automatically populate its Wikidata entry, and that information could then in turn be used to automatically add Template:Wikisource or similar to Wikipedia pages that don't already have it. And to take things even further, you could also E.G. treat links to Wikisource author pages and Wikisource works differently, possibly to the extent of automatically adding "instance of: writer" to Wikidata entries if they're treated as authors in the Wikitext but not currently subclassed as them on Wikidata.
These more advanced forms may require further discussion and consensus. Depending on accuracy, it might be worth keeping a human in the loop for all wiki content changes.
On technical terms, I suggest that the model for parsing these templates into structured data relationships (and the model for vice versa) be kept separate from the code that then applies edits based on those relationships.
Intralexical ( talk) 17:03, 7 August 2021 (UTC)
This is a combination of bot request and request for guidance/assistance on categories/lists/templates.
I created and maintain List of legally mononymous people; see Mononymous person & WP:MONONYM.
I believe it's gotten a bit unwieldy and underinclusive — and that it perpetuates common misconceptions about "legal names". I had initially named it as a contrast to the pre-existing "stage names" and "pseudonyms" lists, which I now believe was a mistake.
I would like to merge it with List of one-word stage names and the mononyms in List of pseudonyms, include the many other mononymous people (e.g. Burmese, royal, pre-modern, etc).
I believe it needs to be converted into something that tracks bio pages more flexibly and automatically, based on a category or template in the bio page itself, rather than through a manual list. I don't know how to do that, which is why I'm requesting help.
I would like the result to differentiate (e.g. by subcategorization or list filter/sort) name-related properties, e.g.:
Ideally, I would like the resulting page to include thumbnail-bio info, e.g. (if applicable):
That part isn't obligatory; e.g. it may not be feasible if the result is a category page.
I believe this means some combination of
I am not familiar with how WP handles such things, so another solution might well be better. Please take the technical parts above just as suggestions. I don't particularly care if e.g. the result is a category page vs a list.
My request is just that it should be automatic based on something on the article page, be easily filtered by type, and have a nicely usable result.
If the infobox lists a full name, title-excluding field with one name, then they're probably mononymous.
Pages with a single name in the WP:NCP#"X of Y" format will usually be mononyms, especially if royalty or pre-modern.
Pages with Template:Singaporean name with parameter 3 only (1 & 2 blank) should indicate a mononym.
Most pages with Template:Burmese name are mononyms, but one would need to check by deleting all honorifics ( wiktionary list) from the person's name. This should use the Burmese text, since ဦး is a title but ဥ is a name, and both are transliterated as "U"; e.g. in U Kyin U (not a mononym), the first U is a title, and the last is a name.
As I suggested above, a bot adding a mononym marker to these pages should do so in a way that's marked as bot-added / tentative. There will of course be false positives and false negatives as always. This is simply a suggestion for doing a bootstrapping pass and extracting the basic info.
I previously asked for help re Burmese names, but got no responses: 1, 2, 3, 4. Volteer1 recently suggested that bots might be an effective approach.
So… any suggestions for how this could best be achieved? Sai ¿? ✍ 17:15, 10 August 2021 (UTC)
When I find bare url's in infobox "website" fields, I always wrap it with the url template (example diff: Special:Diff/1039831301). I do this for two reasons: (1) the infobox width is often unnaturally expanded with long links because bare url's don't wrap, and (2) the template strips the display of http://, which looks better. I considered a possible fix in the code of the infoboxes themselves, but I believe that wouldn't work if two websites were added, or if there is already a url template/other template being used. I believe use of the url template in this field is already fairly widespread and common. — Goszei ( talk) 01:19, 21 August 2021 (UTC)
AFAIK the {{
url}}
template is not supported by the archive bots (please correct if wrong). Thus once the link dies - all links die - it will become link rot requiring manual fix. Which is fine, want to include the risks. No guarantee bots will ever support these templates, there are
thousands of specialized templates it is not feasible to program for each. The more we use them, the more link rot seeps in over time. I suppose if there are so many {{url}}
the sheer magnitude could put pressure on the bot authors to do something, but it would also require modifications to the Lua code probably, consensus on how it works. As it is, bare URLs are supported by most bots and tools. --
Green
C 15:19, 21 August 2021 (UTC)
There is a discussion over at WT:WPT looking for a bot to add/split some WPBS banner templates. I feel like such a bot already exists, so before I put in my own BRFA I figured I'd post here and hopefully find someone who remembers which bot that is (because at the moment I am coming up completely blank). Please reply there to avoid too much decentralisation. Thanks! Primefac ( talk) 14:58, 22 August 2021 (UTC)
Hi folks, not sure if this is a practical request, thought I'd ask anyway. The Science Fiction Encyclopedia has approximately 12,000 entries on people, most of whom are likely notable. As I discovered when writing Sarah LeFanu, at least some of them do not have Wikipedia articles. Is it practical for a bot to trawl through this list, check whether Wikipedia has an article on each entry, and save the entry to a list if it doesn't? Vanamonde ( Talk) 07:33, 23 August 2021 (UTC)
Hey all, hoping a bot (or an AWB master) could be deployed to help in the project space. Many, many WikiProject pages have code at the bottom that begins with:
[[tools:~dispenser/cgi-bin/transcluded_changes.py/ . . .
and of course that link is now bad (gives 404 error on toolserver); but since the pages use tools:
, this is not really a URL change request. An example use is at
WP:WikiProject Geelong#External watchlist.
All instances in the Wikipedia namespace that begin with the above string can be deleted, together with all other characters up until the closing ]]
; there is no retcon that will fix it. Thanks in advance,
UnitedStatesian (
talk) 18:35, 23 August 2021 (UTC)
Referencing this discussion: Wikipedia:Help desk#AiNews.com - Wrongly Indexed
It seems that ainews.com was formerly "Adult Industry News", a news site for the porn industry, which has a lot of citations. The domain now belongs to "Artifical Intelligence News". Needless to say, the new owner doesn't want its domain linked in porn-related articles.
Adult Industry News is now ainews.xxx.
Experimenting with some of the links from https://en.wikipedia.org/?target=*.ainews.com&title=Special%3ALinkSearch it seems that one cannot simply substitute .com with .xxx. The pages must be found on archive.org.
@ GreenC: I am not sure if InternetArchiveBot would handle this unless someone went through all ~130 links and tagged them with {{ dead link}}. I don't know of another bot that comes close. ~ Anachronist ( talk) 15:29, 27 August 2021 (UTC)
I found that some of the islands artical did not create corresponding archipelago variants. Robots can automatically identify and create them.-- q28 ( talk) 01:35, 27 July 2021 (UTC)
Would it be possible to have a bot add {{ reflist talk}} to talk page threads which have refs?(I am not watching this page, so please ping me if you want my attention.) SSSB ( talk) 08:39, 3 September 2021 (UTC)
A helpful message is shown after moving a page:
This section in a nutshell:
|
It seems like a lot of this could be be automated fairly straightforwardly.
It was pointed out on the Teahouse that Wikipedia:Double_redirects do get fixed by a bot, but the fair use rationales, navboxes, etc. also seem unnecessary to fix manually.
Intralexical ( talk) 13:03, 9 August 2021 (UTC)
hello i am currently running a Project, which counts User's Contributions on daily basis, The Project works on several wikis such as ( ckbwiki, SimpleWiki, ksWiki, ArWiki, jawiki) it works by checks User's Contributions and comparing it to previous Contribution, Ranking Top User's Accordingly, if a user is less active than before The Comparison will change to Red otherwise Green, it also Shows User Rights along with their contributions,
i need someone's Help, to make a bot Specially for the Project, cuz i am doing it Manually by myself and it takes so much time and energy, Can anyone help me to make a script for, i really appreciate it. —— 🌸 Sakura emad 💖 ( talk) 19:48, 1 October 2021 (UTC)
Hi Primefac and Sakura emad. Wikipedia:List of Wikipedians by number of edits/Configuration is the bot's source code for this report. -- MZMcBride ( talk) 22:36, 13 October 2021 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | ← | Archive 80 | Archive 81 | Archive 82 | Archive 83 | Archive 84 | Archive 85 |
WantedPages is pretty useless as it is since it considers links from and to talk pages. Does the requested action above help at all? JsfasdF252 ( talk) 03:06, 5 January 2021 (UTC)
Some older WP:FFD pages are titled WP:Files for deletion instead of WP:Files for discussion. Should a bot be created to move them to the newer title just like how WP:Votes for deletion pages were moved to WP:Articles for deletion for the sake of consistency? P,TO 19104 ( talk) ( contribs) 15:54, 23 January 2021 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | ← | Archive 80 | Archive 81 | Archive 82 | Archive 83 | Archive 84 | Archive 85 |
Anything with http://www.starwars.com should be changed to https. JediMasterMacaroni (Talk) 17:03, 25 February 2021 (UTC)
FANDOM to Fandom (website), please. JediMasterMacaroni (Talk) 00:57, 24 February 2021 (UTC)
There are some 800+ transclusions of Template:IPC profile. They go to an archive page, because the original link doesn't work, but with the first five I at random checked, the archive page doesn't work either: Scot Hollonbeck, Stephen Eaton, Jonas Jacobsson, Sirly Tiik, Konstantin Lisenkov.
It seems possible to replace the template with Template:IPC athlete: {{IPC profile|surname=Tretheway|givenname=Sean}} becomes {{IPC athlete|sean-tretheway}}. It is safer to take the parameter from the article title than from the IPC profile template though: at Jacob Ben-Arie, {{IPC profile|surname=Ben-Arie|givenname=<!--leave blank in this case, given name not listed-->}} should become {{IPC athlete|jacob-ben-arie}} [1].
If the replacement is too complicated, then simply removing the IPC profile one is also an option, as it makes no sense to keep templates around which produce no useful results. Fram ( talk) 11:34, 5 January 2021 (UTC)
I would love for a bot or script which could allow users to turn a page's citations which include a URL to Twitter into instances of {{ cite tweet}}. – MJL ‐Talk‐ ☖ 20:17, 3 February 2021 (UTC)
This bot is needed for fixing the grammatical errors. I noticed there were a no. Of grammatical errors in pages which was not attended by any users or administrators. — Preceding unsigned comment added by Kohcohf ( talk • contribs)
Trying to browse through and improve the drafts at Special:AllPages/Draft: is hard because there are so many redirects. Is it possible to create and maintain a category of drafts that are not redirects for easier navigation?
Edmonds community college is now known as edmonds college. I request a bot that changes all articles saying Edmonds Community College to fix the wikilink to change to Edmonds College. This is a college in Washington state, USA. 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) 09:07, 2 April 2021 (UTC)
OK 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) 09:15, 2 April 2021 (UTC)
Links die on the internet. I request a bot that checks if a citation reference to the Internet is mirrored on the Internet Archive to then rewrite the link to its link in the Archive as that won't suffer from Link Rot. This preserves references on Wikipedia which may vanish over time because of Link Rot. Big job! I know! Smart coder needed. 09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC)09:13, 2 April 2021 (UTC) 2601:601:9A7F:1D50:1D79:66DB:8571:CFA7 ( talk) --- This could be used to really fix the dead links backlog by hunting to see if the link was mirrored and then rewriting it to the archive to show the link. --- A second idea is to check if the link is mirrored and if so then to add language giving a backup link on the archive which says to go to the link on the Archive if/in case the first reference link has died from being removed from the Internet.
I want to request this bot called "EnergyBot". EditJuice ( talk) 16:16, 7 April 2021 (UTC)
The bot would make talk page archives. EditJuice ( talk) 16:19, 7 April 2021 (UTC)
No, I don't even have an archive. I request the bot for later, when I will have an archive already. EditJuice ( talk) 17:15, 7 April 2021 (UTC)
The site sentragoal.gr has been hijacked by a gambling site and we should be looking to deactivate active reference links to that source. If someone is able to manage that easily, that would be fantastic. — billinghurst sDrewth 23:55, 13 February 2021 (UTC)
|url-status=usurped
and other links should... have what happen to them? --
Izno (
talk) 00:44, 14 February 2021 (UTC)
Notice of this request has been posted at WT:LANG, and has received only positive comments (thanks or text).
I formatted an example by hand at Dâw language. There are a bit over 3000 URLs to link to. They provide demographic data and reliable sources for the languages, and are an alternative to Ethnologue, which is now behind a very expensive paywall. (And in some cases ELP is a check on Ethnologue, as the two sites often rely on different primary sources and often give very different numbers.)
Last time I did something like this it was handled by PotatoBot, but Anypodetos tells me that's no longer working.
Add links to the Endangered Languages Project (ELP) from our language articles through {{ Infobox language}}, parallel to the existing links to other online linguistic resources (ISO, Glottologue, AIATSIS, etc.)
The list of ELP language names and associated ISO codes and URLs is here. I would be happy if the entries in the table with single ISO codes were handled by bot. I can do the rest by hand, but see below.
There are three columns in the table. Two contain values for the bot to add to the infobox. The third is for navigation, an address for the bot to find the correct WP article to edit.
The bot should add params "ELP" and "ELPname" to the infobox, using the values in the columns 'ELP URL' and 'ELP name' in the data table.
The value in the column 'ISO code' is to verify that the bot is editing the correct WP article. The bot should follow the WP redirect for that ISO code and verify that the ISO code does indeed occur in the infobox on the target page.
For example, say one of the entries in the data table has the ISO code [abc]. The WP redirect for that code is ISO 639:abc. That should take the bot to a language article, and the bot should verify that the infobox on that article does indeed have a param ISO3 = abc or lc[n] = abc (where [n] is a digit).
If there isn't a match (and it's been years since we've run a maintenance bot to verify them all), then that ELP entry should be tagged as having a bad WP redirect for the ISO.
There is sometimes more than one ISO code per language infobox, because we don't have separate articles for every ISO code. (This is where the params lc[n] come in.) If the bot finds that there's already an ELP link in the box from a previous pass, then it should add the new codes as ELP[n] and ELPname[n], and keep a list so we can later code the template to support the article with the largest number [n] of links.
There is occasionally more than one infobox in a WP language article. It would probably be easiest if I did such cases by hand, since there are probably very few of them (if any), unless the bot can determine which infobox on the page contains the desired ISO code.
The bot should test that the external URL lands on an actual page. For instance, a language in the data table is listed as having URL 8117, but following 8117 gets the error message "Page not found :(". Such bad URLs should be tagged both for this project and for submission to the ELP.
If the programmer of the bot wishes to, it would be nice if they could do a run for the 40+ ELP entries that each have 2 ISO codes. (Or have three, if the coding is easy enough, but there are only 16 of those. Anything more than that I should probably do by hand.) If the rd's for those two ISO codes both link to the same Wikipedia article, then the ELP params should be added as above. If they link to different articles, they should be tagged and I'll do them by hand.
Please ping me if you respond. — kwami ( talk) 11:14, 16 January 2021 (UTC)
{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278}}
→ 2547{{#invoke:WikidataIB |getValue |ps=1 |P2192 |qid=Q3042278 |qual=P1810 |qo=y}}
→ DâwSorry, I didn't follow any of that. I don't see any of the data at Wikidata. E.g., I can't tell which are the 7 pages with multiple IDs, or how it was determined which page gets which ELP ID. — kwami ( talk) 08:13, 28 January 2021 (UTC)
ELP
and ELPname
parameters will remain unchanged, and every article that doesn't have those parameters set will try to fetch them from the corresponding Wikidata entry and use those. Please let me know if you need more explanation. --
RexxS (
talk) 13:30, 28 January 2021 (UTC)Thanks, @ RexxS:! That looks great!
Where would we go to update the ELP values?
Could you generate a list of ELP ID's with single ISO codes that are not being triggered, so I could fix them manually? I've noticed severl, but would rather not search all 3000 to check.
Could you add a name to the refs so we could call them with <ref name=ELP/>, <ref name=ELP2/>? And could you add a link to Category:Language articles with manual ELP links
for articles that have a value in ELP? (I've done it for ELP2 in the template.)
A slight hiccup, when ELP is entered manually without ELPname, nothing displays. Something should show, if only to alert editors that the infobox needs to be fixed.
BTW, see Yauyos–Chincha Quechua, where there is a second, partial match. (The only ELP article said to be a subset match to an ISO code.) I used ELP2 to add that to the automated link.
Gelao language has up to ELP4. — kwami ( talk) 22:08, 28 January 2021 (UTC)
@ RexxS: Actually, I do prefer Wikidata, but I didn't know how & where to go about modifying it.
I think there will still be some need to augment it manually, though. In other language WP's, they may decide to follow ISO divisions where we do not, or have other differences in scope that would not be appropriate in WD. So, unless there's a work-around (I'm not familiar with WD), we should probably have the universal elements in WD for every WP to access, and then manual overrides when some particular WP wishes to diverge from that, for whichever reason. (E.g. deciding that ISO or ELP is inaccurate, based on the sources used for an article.) Wouldn't putting everything in Wikidata cause conflicts between different-language WPs?
Also, how can we generate a list of the ELP ID's that are called in WP-en, so I can fix the ones that aren't? — kwami ( talk) 01:17, 30 January 2021 (UTC)
That reduces it to about 500 articles I need to check by hand or manually add to WikiData. — kwami ( talk) 09:29, 2 February 2021 (UTC)
@ RexxS, Vahurzpu, and The Earwig: In the bottom section at Wikipedia talk:WikiProject Languages/List of ELP language names (#Names in the 'Languages with single ISO codes' ...) are the 500+ ELP name that should be linked from WP articles but aren't. Sometimes that's because the WP article covers more than one ELP language, but other times I don't see why there's no link. Maybe just a mismatch in names?
Would it be possible to add those ELP names & links to the WP articles through Wikidata? (To the WP articles that those blue-linked ELP names redirect to.) I've done a few manually, and can revert that once they're in Wikidata. — kwami ( talk) 04:20, 3 February 2021 (UTC)
|ELP=4225|ELP2=1744
to the template, we add |from=Q12953229|from2=Q12633994
, and the template pulls the ELP IDs from there. This has an advantage of making it easy to maintain other dialect identifiers if we choose to move more identifiers to Wikidata. I am not sold on this approach, but wanted to propose it. —
The Earwig ⟨
talk⟩ 07:17, 14 February 2021 (UTC)
There appear to be many many thousands of articles that are missing {{ use mdy dates}} or {{ use dmy dates}}, but which are linked to information that is sufficient to determine which tag should be used. For instance, I think we can safely assume that an untagged page for a high school in a subcategory of Category:High schools in the United States (or with country (P17) = United States of America (Q30) on Wikidata) ought to be using MDY, or that an untagged British biography page not in any categories for expatriates or dual nationality ought to be using DMY. The 3500 pages that use {{ American English}} but have no tag seem like an even easier call.
I'd like to see a bot that goes through old pages and adds the appropriate tags where it can make a firm determination. It would then operate periodically to add DMY or MDY tags to new pages as they are created (but would not override any pages tagged manually). This would help reduce the incidence of the ugly 2021-02-15 dates, and save some amount of editor work. It would be very low-risk, as even if there's some unforeseen circumstance that causes the bot to occasionally mess up, there's very little damage done (e.g. Americans can still understand DMY fine, likewise for Brits with MDY, and most would probably prefer either to YYYY-MM-DD) and correction would be easy.
Does anyone want to take this on? {{u| Sdkb}} talk 23:27, 15 February 2021 (UTC)
based on strong national ties to the topic, which would be the case for the categories here. {{u| Sdkb}} talk 05:43, 16 February 2021 (UTC)
Hi there,
I am looking to for a bot to update a company page [1] with shows that are released as per the corresponding imdb page [2], is this possible? I apologise if this is not the place for this kind of request, I am new to using bots.
Many thanks
MonkeyProdCo ( talk) 15:48, 16 April 2021 (UTC)
References
I'm very busy in real life, and I need a bot to do the editing for me, he will use AutoWikiBrowser and help with vandalism, edit warring, and other problems and situtations. This will not be my only bot, I am also requesting to have 3 bots. He will have a user page, although I do not have one. This will be a good faith bot. If he malfunctions Press the emergency shutoff button. — Preceding unsigned comment added by BCuzwhynot ( talk • contribs)
Hello, I was hoping that a new bot could be created to do what Hasteur Bot used to do which was to notify editors that their drafts were coming up on their 6 month period of no activity when they could be deleted as stale drafts (CSD G13). These notices were sent out after a draft had been unedited for 5 months. We have been missing this since the summer which has resulted in what I think is a higher number of draft deletions and a high volume of requests for restoration at WP:REFUND. I think oftentimes, editors forget that they have started a draft (especially those editors who start a lot of drafts simultaneously), and these reminder notices are very useful for page creators as well as for editors and admins who regularly patrol stale draft reports.
Would it be possible for a bot creator to just reuse code from Hasteur Bot? But I'm just looking for a bot that will do exactly what it used to before it was disabled due to the bot creator's passing. See Special:Contributions/HasteurBot for examples of what I'm looking for. Thank you. Liz Read! Talk! 00:18, 7 February 2021 (UTC)
I'd say more than a few articles are marked stubs but assessed differently by Wikiprojects. Is it a good idea to maintain a (possibly cached) list of these for maintenance purposes. — Preceding unsigned comment added by 5a5ha seven ( talk • contribs) 23:17, 19 February 2021 (UTC)
~~~~
. Or, you can use the
[ reply ] button, which automatically signs posts.)
GoingBatty (
talk) 23:45, 19 February 2021 (UTC)Sections with the names of the months of the year are repeated twice or thrice, in "Events", "Births", and "Deaths". To make section headings unique, I propose the following changes be made:
replace regex
(\w)( ?)===
with
$1 {{none|births}}$2===
or
$1 {{none|deaths}}$2===
depending on the section. JsfasdF252 ( talk) 17:31, 5 February 2021 (UTC); updated 17:38, 5 February 2021 (UTC)
Per MOS:BOLD, "boldface is applied automatically [in] [t]able headers. Manually added boldface markup in such cases would be redundant and is to be avoided.". Special:Search/insource:/\|\+ *'''/ currently returns over 17,000 results. I suggest replacing
\|\+( *)'''([^'\|]+)'''\|
by
|+$1$2|
in all of the occurrences. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 ( 𝗍𝗮𝘭𝙠) 22:13, 21 February 2021 (UTC)
I am proposing a bot that will do the following task: It will add the template {{R to section}} to all the redirects on Wikipedia that link to a section. Yesterday I made almost 200 edits trying to add the template, and I was later inspired to add this request by this diff. 🐔 Chicdat Bawk to me! 11:13, 23 April 2021 (UTC)
Based on this search, there appears to be around 50,000+ articles (mostly on populated places in Iran) created by User:Carlossuarez46 that use the incorrect capitalization of "romanized".
The word is capitalized when it means "to make something Roman in character", and lowercase when it means "convert to Latin script", as it does in these cases. This distinction is reflected across all of our articles ( Category:Romanization), and is supported by dictionaries [2], encyclopedias [3] [4] [5], etc.
If a bot task were made to fix this, it would also probably be prudent to retarget the wikilinks like so: [[Romanization of Persian|romanized]], which is a better target. — Goszei ( talk) 23:56, 17 February 2021 (UTC)
Languages Usually Transliterated (or Romanized)). The title of the previous section, for example, is written as
Languages Using the Latin Alphabet.
In nonspecialized works it is customary to transliterate—that is, convert to the Latin alphabet, or romanize—words or phrases from languages that do not use the Latin alphabet.— Goszei ( talk) 00:31, 18 February 2021 (UTC)
(Romanized as [one or more words in Farsi script])
to reduce the risk of false positives.
Nyttend (
talk) 15:21, 18 February 2021 (UTC)
{{lang-fa|بابصفحه}}
part of the article text could be detected to ensure that the context is correct. I'm no good at regex, but the string {{lang-fa|anything goes here}}, also
Romanized as"
would be the target for making the two changes (decapitalization of romanization, and changing the link target). There may be some cases left over, but likely a reasonable amount that is fit for an AWB pass (i.e. with human review) instead of a bot run. —
Goszei (
talk) 17:58, 1 March 2021 (UTC)Wikipedia currently contains source references in several languages to the websites TracesOfWar.com and .nl (EN-NL bilingual), but also to the former websites ww2awards.com, go2war2.nl and oorlogsmusea.nl. However, these websites have been integrated into TracesOfWar in recent years, so that the source reference is now incorrect in approximately 1,200 pages and a multiple of the source references. Fortunately, there is currently the situation in which ww2awards and go2war2 still link to the correct page on TracesOfWar (redirect), but this is no longer the case for oorlogsmusea.nl. I have been able to correct all the sources for oorlogsmusea.nl manually.
For ww2awards and go2war2 the redirects will stop in the short term, which will result in thousands of dead links, while it can be properly directed towards the same source. Now I have started to make some changes myself by converting sources for these 2 sites as well, but after about a dozen of changes I somewhat lose hope to do this manually at least 1,150 times.
Is there any way this could possibly be done in bulk? A short example: person Llewellyn Chilson (at Tracesofwar persons id 35010) now has a source reference to http://en.ww2awards.com/person/35010, but this must be https://www.tracesofwar.com/persons/35010/. In short, old format to new format in terms of url, but same ID.
In my opinion, that should make it possible to convert everything with format ' http://en.ww2awards.com/person/[id]' (old English) or ' http://nl.ww2awards.com/person/[id]' (old Dutch) to ' https://www.tracesofwar.com/persons/[id]' (new English) or ' https://www.tracesofwar.nl/persons/[id]' (new Dutch) respectively. The same applies to go2war2.nl, but with a different format slightly. The same has already been done on the Dutch Wikipedia, via a similar bot request. Is this possible? Lennard87 ( talk) 10:57, 29 April 2021 (UTC)
Is it possible for a bot to change the following project talkpage templates?:
{{
WikiProject Floods}}
to {{
WikiProject Weather|floods-task-force=yes}}
{{
WikiProject Weather Data and Instrumentation}}
to {{
WikiProject Weather|met-data-task-force=yes}}
{{
WikiProject Droughts and Fire Events}}
to {{
WikiProject Weather|droughts-and-wildfires-task-force=yes}}
It would be rather time-consuming to change all of these by hand and the templates dont exactly agree with the changes I made to avoid changing the talkpage templates. Noah Talk 22:48, 12 April 2021 (UTC)
{{
Flood}}
and {{
Weather-data}}
? Thanks!
GoingBatty (
talk) 05:09, 16 April 2021 (UTC)Please could someone replace ELs of the form
with
{{
Cite rowlett|bhs}}
which produces
Thanks — Martin ( MSGJ · talk) 05:38, 19 March 2021 (UTC)
{{
cite web}}
, plain links, and links with piped text, and with/without additional plain bibliographic notes. For example, 165 of the https:// form are in a "url=..." context. I think there are too many variations to do automatically.
DMacks (
talk) 15:06, 19 March 2021 (UTC)
SEE Wikipedia bot requests #internet archive. A guy replied about the bot preppery. Tell that bot owner to come here and look at this conversation
Is there an existing bot that could add the missing Template:Documentation to other templates' pages? Are there any prior discussions? -- DaxServer ( talk) 12:20, 2 May 2021 (UTC)
| content =
of
Template:Documentation#Usage. Before considering a bot you would need consensus to stop that practice. It's common for simple documentation.
PrimeHunter (
talk) 15:10, 2 May 2021 (UTC)
I want a robot to be able to add {{ Bare URL inline|{{ subst:DATE}}}}to the end of all bare URLs.-- Alcremie ( talk) 09:54, 11 March 2021 (UTC)
The Cleveland Clinic Journal of Medicine used to be free and open, but now it is free with registration. Can a bot be modified to make edits like
this, where I removed things like the | doi-access = free
parameter and I added the | url-access= registration
parameter? I imagine a complicating factor might be having the bot generate urls. This is my first bot request from memory so my apologies if you feel I'm wasting your time by making this request. Thank you.
Biosthmors (
talk) 06:58, 16 April 2021 (UTC)
I've recently gone through all ACW unit pages and manually adjusted pagenames in order to standardize usage according to this discussion. After I performed these page moves, this discussion made clear I needed to repetitively replace a vast number of category entries in the form "X Y Civil War regiments" toward "X Y Civil War units" on appropriate unit articles, including their container categories. The top-most container categories for these changes are located at Category:Regiments of the Union Army and Category:Regiments of the Confederate States Army, which themselves need to be changed to "Units of the Y Army". There are some current state ACW regiment categories which omit "Confederate States" and "Union" merely because no units raised in the state were part of that army (for example: "Vermont Civil War regiments" and "Mississippi Civil War regiments"). In all cases, "regiments" should be changed to the more inclusive plural noun "units". This would take several thousand word replacements (in category names) at a page level. Is this something a bot might do? Is there a better way? BusterD ( talk) 18:45, 21 April 2021 (UTC)
On articles protected by pending changes, if there are no pending edits to an article, then autoconfirmed users should be able to have their changes automatically accepted. There is currently a rather frustrating bug that causes some edits by autoconfirmed users to be erroneously held back for review: please see Wikipedia:Village pump (technical)#Pending Changes again and various phab tickets [6] [7]. Apparently, the flagged revisions/pending changes codebase is completely abandoned (no active developers who understand the code), and currently no timely fix to this issue is anticipated. As an interim stopgap measure while we attempt to find developers to fix the underlying software, would it be possible to create a bot that automatically accepts pending changes made by autoconfirmed users where they should have been automatically accepted by the software? Thanks, Mz7 ( talk) 23:41, 12 March 2021 (UTC)
+reviewer
so I can experiment with the relevant API calls. The bot account this task runs under will obviously need this eventually as well, but that's for after the BRFA of course.
ƒirefly (
t ·
c ) 11:03, 14 March 2021 (UTC)
BRFA filed ƒirefly ( t · c ) 17:33, 15 March 2021 (UTC)
This is a request specifically to benefit external wikis who import Wikipedia templates for their own use. Currently, the redirect {{ Doc}} is transcluded onto over 3000 templates. This means that any wiki which imports one of these templates will also get a template-space redirect that they may not want, or at least a redlink that they then have to fix (if they happen to care about that). I don't think this is explicitly covered by any of the points at WP:NOTBROKEN, but it feels to me at least to be within the spirit of the second-to-last "good reasons" point:
In other namespaces, particularly the template and portal namespaces in which subpages are common, any link or transclusion to a former page title that has become a redirect following a page move or merge should be updated to the new title for naming consistency.
If the community here decides (or has decided) that reuse on external wikis isn't a major enough concern to justify this type of change, that's fine. I personally think it's worth changing this, though as a reuser at one of those external wikis, I'm obviously biased here. =) 「 ディノ奴 千?!」 ☎ Dinoguy1000 07:01, 12 March 2021 (UTC)
I want to make bots — Preceding unsigned comment added by 2601:246:5980:6240:8D01:A7E1:9D68:EE5C ( talk) 16:49, 12 May 2021 (UTC)
Ask you to review the article "ٱفلام حصرية" Thanks — Preceding unsigned comment added by 196.151.130.174 ( talk) 12:33, 18 May 2021 (UTC)
Please CfD-tag all categories in the following bulk nomination(s):
– LaundryPizza03 ( d c̄) 03:39, 16 May 2021 (UTC)
The article about Estradiol as a hormones contains an non working external link in the references. On the reference number 71., the last external link as a PDF is citing the values from this source " Establishment of detailed reference values for luteinizing hormone, follicle stimulating hormone, estradiol, and progesterone during different phases of the menstrual cycle on the Abbott ARCHITECT analyzer".
This external link redirects to an 404 server error and needs to be replaces with an operating link. The original research document is available on the laboratory's website.
How to change this link? I don't know how to use a bot. I'm thankful for any help. — Preceding unsigned comment added by Jerome.lab ( talk • contribs) 13:11, 30 April 2021 (UTC)
The goal is to remove "color =" and "popularity=" parameters from {{ Infobox music genre}}. Color parameter was suppressed in January 2019 [8], while popularity was removed in 2013 [9], but they are still present in ~900 and ~300 templates respectively [10]. It would be great if we could clean up these templates. Solidest ( talk) 17:06, 3 March 2021 (UTC)
This did happen before, and is a bit of an issue. There are apparently some really old reports created by COIBot out there that are not NOINDEXed and which appear in the Google search results. As far as I know we did solve that some time ago in the robots.txt, but I am not sure whether those really old, untouched reports actually 'get' that properly set through (and I am not sure whether a bot-run is necessary).
Can a bot go through all pages under Wikipedia:WikiProject Spam/LinkReports, Wikipedia:WikiProject Spam/Local (so e.g. Wikipedia:WikiProject Spam/LinkReports/example.org etc.) and add {{ NOINDEX}} to any pages that are currently in the category for no-index (I would not know how to find that in the first place). The edit will then enforce the parsing of the page and make sure that from our side the pages are NOINDEXed. If all are NOINDEXed all of them should probably be purged. We could consider to delete them, but some are representations of evidence that is used for the decisions to blacklist (but nothing is really lost, the bot can recreate them with data over the last 10 years, and since admins are handling the cases they can always see deleted revids.
A second step would be to contact Google to remove those that have not been re-indexed (and hence removed) by Google from the Google database, but that is probably something that needs to be done on a case-by-case basis so also not a bot task. It is an advice that we then can give to anyone who 'complains'. Thanks. -- Dirk Beetstra T C 13:52, 2 May 2021 (UTC)
Hi, I have seen many articles that seem to get WP:LAYOUT wrong. For example, placing 'See also' after 'References', 'External links' placed before 'References'. I think it is easy for bots to read the layout and correct them. Perhaps, an existing bot can be programmed to do that. Regards-- Chanaka L ( talk) 10:42, 1 April 2021 (UTC)
== *References *==[^=]+== *See also *==
and saves only if this code no longer exists after applying the genfixes. A similar bot could look for articles with == *External links *==[^=]+== *References *==
that saves only if the code no longer exists after applying the genfixes. What do you think?
GoingBatty (
talk) 04:18, 7 May 2021 (UTC)
The Dispute Resolution Noticeboard has had a bot-maintained table of the status of cases for several years, and this table can be transcluded onto user pages, and onto the main status page of DRN. This table should be updated a few times a day. This task was previously done by User:HasteurBot, but that bot has been retired from service because its operator is no longer in this world. This task was, until about ten days ago, done by User:MDanielsBot, but that bot has stopped doing that task. It is doing other tasks, but not that task. Its bot operator is on extended wikibreak and did not respond to email. I have spoken to one bot operator who is looking into this task. Robert McClenon ( talk) 16:21, 13 April 2021 (UTC)
Please tag all of the following articles included in Wikipedia:Articles for deletion/List of names of European cities in different languages:
– LaundryPizza03 ( d c̄) 21:27, 17 June 2021 (UTC)
Could a bot be created to add/update {{
Top 25 report}} to the talk pages of pages featured in the
Top 25 reports, it would be useful if the bot could also do the annual top 50 report and if the bot could go through the old top 25 reports as a few are missig thier talk page banners. Thanks,
SSSB (
talk) 09:20, 18 April 2021 (UTC)
Please, rename "Stadio Pierluigi Penzo" to "Stadio Pier Luigi Penzo" in this pages. Thanks in advance!!! -- 2001:B07:6442:8903:D4D:F67B:CF18:C681 ( talk) 13:36, 14 June 2021 (UTC)
Hi. I'm looking to revive a request previously made in 2018, which was discussed (to a considerable extent) here and here. Back then, TheSandDoctor originally offered to help, but due to other circumstances was unable to devote time to the task, and suggested that I ask here again. I've left it for quite some time, but better late than never I guess.
Briefly, names should be sorted by given name (i.e. as they appear) in Thailand-related categories. A Thai biography footer should as such contain the following:
{{DEFAULTSORT:Surname, Forename}} [[Category:International people]] [[Category:Thai people|Forename Surname]]
Currently, compliance is all over the place, with the Thai order being placed in the DEFAULTSORT value in some articles, and the Thai sort keys missing in others. A bot is needed to: (1) perform a one-time task of checking DEFAULTSORT values in Thailand-related biographies (a list with correct values to be manually supplied), and replacing the values if incorrect, and (2) do periodical maintenance by going through a specified list of categories (probably via a tracking template placed on category pages) and adding the Thai name order as sort keys to those categories' calls in each member article that is a biography. In most cases, the Thai name order would be the page title, but there are several exceptions, which I will later elaborate upon. This had been advertised and some details of the task ironed out back then, but since it's been three years there may be need to reaffirm consensus. I would like to see first, though, whether any bot operators are interested in such a project. -- Paul_012 ( talk) 00:13, 26 February 2021 (UTC)
{{DEFAULTSORT:Surname, Given name}}
[[Category:Category name|Given name Surname]]
[[Category:Category name|Category Sort key specified]]
{{DEFAULTSORT:Surname, Given name}}
[[Category:Category name|Given name Surname]]
Maybe I should provide a bit more background first. The short answer your last question would be, "No." To get the long answer, I went through the about 4,000 Thai people articles to identify the following patterns:
lengthy name examples
|
---|
|
I guess all this is to say it's probably far too complicated for the defaultsort value to be automatically processed; reading off a manually compiled list would be more practical. I'm still tweaking the list but see for example an earlier (outdated) version at Special:Permalink/829756891.
I think the process should be something more like:
[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)The above applies to the bot's initial run. There should also be periodical update runs, where 2.1 would be:
[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)Category recursion is tricky and can lead to unexpected problems, so {{ Thai people category}} should probably be placed directly on all applicable category pages. (That may also be a bot task.) I'm working off this preliminary list: Special:Permalink/1011801926, but some further tweaks my still be needed.
Since the Thai sort key will be the same as either the article title (for regular names) or the DEFAULTSORT value (for royalty, etc.), the DEFAULTSORT_UPDATE_LIST can note which case applies to each article, and this can be tracked in the article source. I think this would be preferable in the long run, as a central list will be hard to keep updated while a tracking template can be added to new articles as they are created. {{ Thai sort same as defaultsort}} wouldn't need to generate any visible output (except maybe a tracking category if useful).
Does this more or less make sense? -- Paul_012 ( talk) 23:15, 12 March 2021 (UTC)
{{Thai sort same as defaultsort}}{{DEFAULTSORT:Surname, Given name}}
(nothing between the template and DEFAULTSORT)? --
Kanashimi (
talk) 01:26, 13 March 2021 (UTC)[[Category:Category name|PAGENAME]]
(though format the page name to exclude parenthetical disambiguators)Given-name Surname
as the DEFAULTSORT (which the bot will need to correct) is quite old (mostly found in articles from over a decade ago I think). New articles today will likely have DEFAULTSORT values in the Surname, Given-name
format, so will only need PAGENAME sort keys added. The minority of articles which require specific formatting and tagging can be handled by patrollers following WikiProject Thailand's potential new articles feed as they are created. --
Paul_012 (
talk) 09:13, 13 March 2021 (UTC)
I've opened a discussion requesting community input at Wikipedia talk:Categorization of people#Bot for Thai name category sorting. I've now also listed the categories and articles at Wikipedia:WikiProject Thailand/Thai name categories and Wikipedia:WikiProject Thailand/Thai name sort keys. -- Paul_012 ( talk) 18:54, 16 March 2021 (UTC)
@ Paul 012: Sorry, it seems there are some pages modified during the interval we waiting for the task approved. Can you check and update Wikipedia:WikiProject Thailand/Thai name sort keys again? Thank you. For example,
And how do we deal with the pages moved in the future? When the pages moved, the sort key will not follow the changing. -- Kanashimi ( talk) 00:36, 31 May 2021 (UTC)
I know I'm late to the party but would it make any sense to sort the Thai names as [[Category:Thai foos|{{PAGENAME}}]]
(literally
the word PAGENAME in braces) so they will update automatically if the page name changes? That would include parenthetical qualifiers, but consistently sorting Foo bar (footballer born 1900) before Foo bar (footballer born 2000) might not be a bad thing. Non-Thai names in Thai categories could either follow suit to sort consistently (often by given name) or simply omit the sort code to sort by DEFAULTSORT (normally surname).
Certes (
talk) 09:45, 2 June 2021 (UTC)
Paul_012 I start running the routine version, it modifying 2 pages. Please check this round. -- Kanashimi ( talk) 23:31, 3 June 2021 (UTC)
There's current a boatload of raw IPv4/ IPv6 address used in URLs, instead of something legit useful to readers. Is there a way to parse/update a link like
to
by bot? Or something similar/close to this? I fully expect most such links to not be recoverable, but there could be a few that are. Headbomb { t · c · p · b} 23:01, 28 May 2021 (UTC)
google.com
just resolved to 172.217.7.14
for me, and while
https://172.217.7.14/search?q=earwig seems to work and could theoretically end up in an article somehow, that IP's reverse DNS is lga25s56-in-f14.1e100.net
which is clearly not what we want. Certainly tools could be used to build lists of possible replacements that could be manually reviewed, and a bot could perhaps operate on that, or we could use an existing method like
WP:URLREQ. —
The Earwig (
talk) 00:18, 29 May 2021 (UTC)
bot for linking stuff in wikipedia
FizzoXD (
talk) 03:31, 9 June 2021 (UTC)
By linking things i mean like linking to other wikipedia articles. Like if there is a word link "
internet meme" the bot would like it to the page by editing it.
FizzoXD (
talk) 05:31, 9 June 2021 (UTC)
I would like to request that a bot starts putting a project-box on all articles that appear in the Wikipedia:The 2500 Challenge (Nordic). Plenty of other projects has this kind of box at the articles talk pages like at Talk:Gunnar Seijbold. The project is growing bigger. BabbaQ ( talk) 11:59, 5 May 2021 (UTC)
Hopefully once a bot has gone through those articles, there may only be a few additional cases that I can manually fix. Unfortunately 700 is too much for me to do manually :(. Thanks I hope! -- Tom (LT) ( talk) 09:45, 2 April 2021 (UTC)
I really thought we had a bot or several working on this, and it seems it was brought up as recently as last year, but I just had to make yet another manual fix, so... We really need a bot that reliably fixes section links when section names are changed. Preferably one that stays online for more than a few weeks before it stops working. understatement {{u| Sdkb}} talk 06:58, 21 May 2021 (UTC)
|notify=Ladsgroup
to the bot's config line on
Wikipedia:Bot activity monitor/Configurations. –
SD0001 (
talk) 07:06, 21 May 2021 (UTC)
{{ demo inline}} is similar to {{ tbullet}}, but the former supports an infinite amount of named and unnamed parameters. {{ tbullet}} is more widely used, but can only support 6 unnamed parameters. I suggest replacing this:
{{tbullet|t|1|2|3|4|5|6}}
with this:
* {{demo inline|<nowiki>{{t|1|2|3|4|5|6}}</nowiki>}}
JsfasdF252 (
talk) 22:27, 24 April 2021 (UTC)
Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long over due, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 ( talk) 01:14, 4 May 2021 (UTC)
{{ demo inline}} is similar to {{ tbullet}}, but the former supports an infinite amount of named and unnamed parameters. {{ tbullet}} is more widely used, but can only support 6 unnamed parameters. I suggest replacing this:
{{tbullet|t|1|2|3|4|5|6}}
with this:
* {{demo inline|<nowiki>{{t|1|2|3|4|5|6}}</nowiki>}}
JsfasdF252 (
talk) 22:27, 24 April 2021 (UTC)
Following up from
this discussion about converting links to Wikimedia commons from http → https, it was decided a better option is to convert "external" links (i.e only those enclosed in [...]) to interwiki links since it provides better protection against
WP:LINKROT. For example [http://commons.wikimedia.org/wiki/File:Example.jpg Wikimedia commons]
→ [[:commons:File:Example.jpg|Wikimedia commons]]
There are currently about 4,100 main space pages that use http or https external link to commons. Most of them can be replaced with interwiki link. ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ ( talk) 17:58, 28 May 2021 (UTC)
[[:commons:File:Example.jpg|Wikimedia commons]]
?
Primefac (
talk) 20:04, 28 May 2021 (UTC)
There are 210 talk pages that transclude both {{ GA}} and {{ article history}}. A bot could integrate the GA template data into the latter, to reduce template clutter. – SD0001 ( talk) 18:15, 1 June 2021 (UTC)
The March 2021 cleanup backlog for the Medicine WikiProject is currently dead links on articles that start with the letter A. About a quarter/third or so of the list was dead links in "cite journal" and "cite book" templates ( Template:Cite journal and Template:Cite book) that contain identifiers such as ISBN, DOI, or PMID. A URL is not necessary in these references because identifiers are used. Using the March backlog as a sample and considering the size of the dead link category for the Medicine WikiProject as a whole (currently around two thousand), there are potentially thousands of dead links site-wide that fall into this type of dead link. Removing a single one of these dead links is simple but finding all of them and making a large number of tedious edits is very time-consuming, so this seems like a task a bot could do. Note that |access-date and other URL-related parameters would also be removed. An example of what the bot edits would look like. Velayinosu ( talk) 04:09, 25 March 2021 (UTC)
|doi=
, |jstor=
, |pmc=
, or |pmid=
). If you have questions/concernes, let me know. Thanks!
Ajpolino (
talk) 15:12, 14 May 2021 (UTC)|url=
but there are many other places in a template a URL might be located. See the CS1|2 Lua Configuration source and search on "url". Since it has a {{
dead link}}
it unlikely to have a |archive-url=
+ |archive-date=
+ |url-status=
.. but I have seen it, the possibility exists, they should be removed as well. Let's see.. it could end up removing a dead URL that can be saved via Wayback and this Wayback contains the full PDF, while the DOI link doesn't contain the full PDF. One way to tell is if the template has a |doi-access=free
which flags the full PDF is freely available at the identifier-produced URL. Pinging Nemo who is more knowledgeable.. @
Nemo bis:. --
Green
C 16:53, 14 May 2021 (UTC)
|doi-access=
is used relatively rarely (unless a bot has been adding it?), but I aagree it's the most straightforward task. So let's see how wide a net that is, and if we then want to test a broader set of restrictions we can do so. Thanks again William Avery!
Ajpolino (
talk) 20:32, 20 May 2021 (UTC)I am looking for anyone to take a task of replacing signatures of
PumpkinSky (
talk ·
contribs). Their old signature had <font>...</font>
tags which are creating obsolete html tag Lint errors in all pages that have their signature.
Regex search shows that the signature is currently in 1,081 pages across namespaces. To remove the errors, the font tags need to be replaced with span tags.
[[User:PumpkinSky|<font color="darkorange">Pumpkin</font><font color="darkblue">Sky</font>]] [[User talk:PumpkinSky|<font color="darkorange">talk</font>]]
need to be replaced with [[User:PumpkinSky|<span style="color: darkorange;">Pumpkin</span><span style="color: darkblue;">Sky</span>]] [[User talk:PumpkinSky|<span style="color: darkorange;">talk</span>]]
ಮಲ್ನಾಡಾಚ್ ಕೊಂಕ್ಣೊ ( talk) 16:47, 11 May 2021 (UTC)
Extended content
|
---|
str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿րևանցիԵ]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font style="colou*r:["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* size="*([\dpxem\. ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; size:$2;">$3<\/span>'); str = str.replace(/<font face *= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\';">$2<\/span>'); str = str.replace(/<font colou*r *= *["']* *([#a-z\d ]+)["']* face= *"* *([a-z ]+)"* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1; font-family:\'$2\';">$3<\/span>'); str = str.replace(/<font face= *"* *([a-z ]+)"* colou*r *= *["']* *([#a-z\d ]+)["']* *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="font-family:\'$1\'; color:$2;">$3<\/span>'); str = str.replace(/<font style *= *"color:([#a-z\d ]+);" *>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="color:$1;">$2<\/span>'); str = str.replace(/<font style *= *"([:#a-z\d ;\.\-]+)" *>([a-z\d_— \'’&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\@\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font>/gi, '<span style="$1">$2<\/span>'); str = str.replace(/(\[\[User:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); str = str.replace(/(\[\[User[ _]talk:[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?⊕⊗会話投稿記録]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); str = str.replace(/(\[\[Special:Contributions\/[a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+\|)<font colou*r *= *["']* *([#a-z\d ]+)["']*>([a-z\d_— \'&;:!°\.#ταλκ\(\)\-\?\.,\!ößáàăâåäãāảạæćČçĐéèêếềễěëėęēệíìîïİįīịıĽńñóòôỗöõøōơờọœřšŞúùûüũūưứýỳ¡§:\!\?\&⊕⊗会話投稿記録日本穣投稿]+)<\/font> *(\]\])/gi, '$1<span style="color:$2;">$3<\/span>$4'); |
Thank you! -- ExperiencedArticleFixer ( talk) 16:41, 3 June 2021 (UTC)
Hey, was wondering if anyone here could help me extract the data from the templates listed at Template:Subatomic particle/symbol and Template:Subatomic particle/link and post in one of my user pages? The data in each template is a single line so there is nothing special here. -- Gonnym ( talk) 17:09, 13 July 2021 (UTC)
See PINOFF, my earlier request. The bot would look for pages with large numbers of templates like {{ Citation needed}} and add them to a list in its own userspace. It would never edit pages outside its own userspace; users who wanted to view the output would just transclude the page via template. Does this sound like something that needs a bot? 'Ridge( Converse, Create, & Fascinate) 15:17, 22 June 2021 (UTC)
I am requesting for a bot to go through userpages so that a list is created of pages where its users have made edits only to their pages.
The intent is to tag these user pages as {{ Db-notwebhost}}. Catchpoke ( talk) 00:46, 22 April 2021 (UTC)
Some of the newly created pages are orphaned pages, but they have not been marked with orphaned templates, is tagbot that can do this?-- q28 ( talk) 08:44, 9 June 2021 (UTC)
There are 298 list articles in Category:Lists of National Football League draftees by college football team. Of these, approximately 121 articles contain a table of records which are not in correct chronological order (they contain newer "2021" rows on top of older "19XX" rows).
Currently, all 121 articles in need of one-time cleanup contain sections with {{Chronological|sect=yes|date=June 2021}}
. So that is potentially a hook to key off of.
There are two cases where automated cleanup should update the table to cause render in chronological order (oldest YYYY rows first, newest YYYY rows last, AND preserve the existing top-bottom sequence of rows within a given year's draft by Round/Pick/Overall ).
Note, there might be a few per-article variations which do not use a section name of #Selections
Here are the other cases to consider, where no bot modification is desired:
If this can be automated, I am happy to manually inspect all 298 articles and fix/revert any missed edge cases to stable where necessary. Updating these manually would be very slow and prone to error. The scope is unlikely to ever be done, even with WP:NFL project participation. So any automation would be an enormous time-saver and win. Cheers, UW Dawgs ( talk) 23:41, 9 June 2021 (UTC)
Done @ UW Dawgs: This bot couldn't help with List of Florida State Seminoles in the NFL Draft. William Avery ( talk) 12:32, 22 July 2021 (UTC)
As an AfC reviewer I come across many draft articles with a disproportional ratio of references to the prose text. A healthy number of such draft articles are on subjects that ultimately turn out to be notable, i.e. authored by newcomer editors with good intentions who are simply oblivious of WP:OVERCITE.
By developing and launching a bot that would show a warning notice for editors trying to submit a draft article triggering WP:OVERCITE filters, we would:
Suggested filters that would trigger this notice could vary and be based on a community consensus. Examples of filters:
Example text for the notice: "It seems like your draft is using too many references. Please keep in mind that draft articles are not accepted based on the number of references provided. To the contrary, citation overuse can delay review or even be a reason for a decline. Please see WP:OVERCITE and consider editing your draft accordingly." nearlyevil 665 06:24, 12 June 2021 (UTC)
is an essaythat
contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article, nor is it one of Wikipedia's policies or guidelines, as it has not been thoroughly vetted by the community. Some essays represent widespread norms; others only represent minority viewpoints.
References
There are many list type articles, or articles containing a bibliography, or scientific details, where a large number of citation is essentia, and it is the accepted practice in medical articles to give what would probably be considered an excessive number in any other field. Any notice that might lead to people removing such references would be giving exactly the wrong advice. There are however several real problems, but I do not see how they are capable of easy solution by bot.
I do not think articles are often declined on this reason alone; and if they are, it is incorrect, and should be brought to attention of the deleting editor or if necessary at Deletion Review. Rather, the inclusion of excess referencing is often a sign of promotional editing, or editing by a fan. It is bad style, and while it is neve correct to delete for style alone, bad style often indicates problems, and will certainly cause an article to be looked at carefully--perhaps even hyper-carefully. It's not currently concentrated on women; rather, a few years ago some of those running editathons and projects on undercovered areas were somewhat careless of ensuring that the articles written were of more than borderline notability . This did encourage the tendency of some editors with traces of misogeny to be over-critical in this area. But those running such projects have learned, and so have most of the misogynists. DGG ( talk ) 06:48, 16 June 2021 (UTC)
I'm not really sure what bot is involved with this but we have had ongoing problems with Category:AfC G13 eligible soon submissions. When things are running normally, it holds between 4,000 -5,000+ draft articles that are between 5 and 6 months since their last edit. When they hit 6 months without a human edit, they are deleted per CSD G13. Also, reviewers from AFC (particularly DGG) go through this category and "rescue" promising drafts and postpone their deletion and sometimes even move good drafts into main space.
What has happened this past year is that this category starts going down to 3,000 drafts, 2,000 drafts, 1,000 drafts and now it is only holding 478 expiring drafts. When this has happened in the past, I have asked ProcrastinatingReader for help and he has been able to do some technical task that causes the category to, over a few days, fill back up again. Right now though, he can't get to this task and advised me to come here and ask for help.
I have little information to offer beyond a description of the problem. I have no idea what bot or template categorizes these drafts or what ProcrastinatingReader did to fix this problem. I know that having categories filled has been an ongoing problem because I have brought the issue to the Village Pump and other individuals several times over the past few years. So, I'm not sure what the fix would be. If you could find a permanent solution, that would be awesome. Thank you. Liz Read! Talk! 02:20, 16 June 2021 (UTC)
Sounds like a task for User:Joe's Null Bot. According to toolforge:sge-jobs/tool/nullbot it's still operational, despite the warning on its page - FASTILY 22:16, 7 July 2021 (UTC)
I created a custom task to process this in a more simple manner. No onwiki list to manage (like User:ProcBot/PurgeList2) but this also means it will actually complete its runs. It'll run once a week. Current category count is just over 3k. It purges those three cats listed on User:ProcBot/PurgeList2, which I presume together contain all AfC active drafts. It will update the G13-expiring-soon for pages in those three categories only. I still advise moving to a more sophisticated DB-generated system, such as SD's list. ProcrastinatingReader ( talk) 11:26, 9 July 2021 (UTC)
Pls add my name bots user. Hind ji ( talk) 06:19, 29 July 2021 (UTC)
I regularly come across The New York Times in articles written only as "the New York Times". Would it be possible to have a bot find all instances of "New York Times", and ensure that the "the" before the instance is not only capitalized, but also italicized as part of the proper name of the publication? This could then also be repeated for other publications that the "The" is part of the proper name (The Boston Globe, The Herald Journal, The News Courier, The Plain Dealer, etc.) after/later? - Adolphus79 ( talk) 08:01, 22 July 2021 (UTC)
According to the New York Times article "Ibsen or Shakespeare?" (March 18, 1928), Harrison Grey Fiske was 12 years old when he first set eyes on the future Mrs. Fiske—she was but eight, performing in a Shakespearean role.
("a The NYT article") or ... ("the The NYT article"). – Jonesey95 ( talk) 21:20, 22 July 2021 (UTC)
leading article may be dropped when the title is used as a modifier: According to a New York Times article by .... Certes ( talk) 00:43, 23 July 2021 (UTC)
I'm not going to disagree with concensus, nor ask the bot(master)(s) for an impossible task. It was just an idea I had at 4AM... lol - Adolphus79 ( talk) 02:34, 23 July 2021 (UTC)
Hey, Bot folks,
I accidentally happened upon a user talk page where the user page had been deleted almost exactly four years ago (see User talk:Bmoy94/sandbox/Innova Market Insights) but the user talk page was not deleted. This was a surprise to me because we have database reports for orphaned talk pages (see Wikipedia:Database reports/Orphaned talk pages and Wikipedia:Database reports/Orphaned file talk pages). Are User Talk pages exempted from these reports?
Obviously, this is not an urgent problem but it would be useful if there was a bot report for orphaned User Talk pages as well. Right now, many deletions are done with Twinkle which will delete redirects but not the redirect talk pages. If it is a regular article redirect talk page, it can show up on the orphaned talk page report or the broken redirects report but not all talk pages of redirects are also redirects and some redirects are from User pages. I don't think there are hundreds of these pages out there but it would be useful if, like the orphaned file talk page report, this could become a weekly report that is done. Thank you for considering this request. Liz Read! Talk! 22:46, 24 June 2021 (UTC)
COUNT(*)
with the page titles.
GoingBatty (
talk) 05:02, 25 June 2021 (UTC)
I thought it would be "easy" to download the current list of titles (enwiki-20210620-all-titles.gz from here) and do some clever stuff to find orphaned talk pages. Problem: my first run found 16,228,343 orphaned pages! Some superficial checking showed that most of those were due to things like Talk:Example/Archive_1 or .../GA1 or .../FA1 or .../Test or .../OtherStuff. When I finally found a few dozen orphaned talk pages, they were tagged as "keep" (in Category:Wikipedia orphaned talk pages that should not be speedily deleted), examples Talk:Qazwsxedcrfvtgbyhnujmikolp, WT:AOTM, Category talk:Films about hebephilia, Draft talk:Anirban Sengupta. Then there are weird redirects like WT:MOS:VG. I finally found a junk page: TimedText talk:Constitution.ogg (and probably a few more). I'm posting this to let anyone wanting to take the job on that quite a lot of pruning of results would be required. Johnuniq ( talk) 11:16, 25 June 2021 (UTC)
We recently had a discussion at Template talk:Infobox person#Deprecating the net worth parameter? and it was decided to remove the parameter. Would it be possible to get a list of all the infoboxes that use the parameter. Also could it be go back to July 11 rather than the current date since we have someone who is removing all the parameters as we speak? Thanks! Patapsco913 ( talk) 16:00, 18 July 2021 (UTC)
Please remove these pages (that contain {{ pec}} and the related templates and use the template {{ Category class}}) per the discussion at Template talk:Category class. Qwerfjkl talk 21:41, 13 August 2021 (UTC)
Hello. I'm a marine biologist specialized in Echinodermata. I would like to be informed of any new picture of these animals so I can review the identification and, when useful, add them to the relevant Wikipedia articles. But as there are over 7000 species of them, of course I can't check all the categories every day. I used to benefit from Ogrebot's newsfeed for a long and useful time but it is no longer working. Do you guys know any other way I could get such uploading newsfeed ? Thanks and best regards, FredD ( talk) 14:23, 8 June 2021 (UTC)
Hi all, the WP:Featured and good topic candidates promotion/demotion/addition process is extremely tedious to do by hand, and having a bot help out (akin to the FAC and FLC bot) would do wonders. Unfortunately, this would have to be a rather intricate bot—see User:Aza24/FTC/Promote Instructions for an idea of the promotion process—so I don't know if many would be willing to take it up. But regardless, such a bot is long over due, and its absence has resulted in myself, Sturmvogel 66 and GamerPro64 occasionally delaying the promotion process, simply because of the discouraging and time consuming manual input needed. I can certainly provide further information on the processes were someone to be interested. Aza24 ( talk) 01:14, 4 May 2021 (UTC)
Responding in order:
Hi there, I've been occasionally trying to chip away at the articles in the Category:Infoboxes with unknown parameters. Some of these categories are... beefy to say the least, some of the standouts are Film with 11.2k, German location with 9.8k, Officeholder with 3.5k, Organisation with 3k, Scientist with 7.4k, Settlement with 10.5k and Sportsperson with 3.7k. Currently, the only way to really tell which parameter is causing the issue is to attempt to edit the article and either check the infobox parameters or save the preview so it appears at the top of the page. You are at least given an idea of what to look for by the sortkey it appears under in the category, but I don't think this works when there are multiple errors properly, which results in having to consult the preview regardless. I'm not certain but I feel like a lot of these ones especially are either deprecated parameters that PrimeBOT might be able to handle or simple misspellings or other issues that could be handled with AWB or other similar tools, such as missing underscores and dashes and alt test and image sizes being separated with a pipe.
Since this requires information to be grabbed, it seemed like a bit more than an SQL query would be necessary, I was thinking of some sort of bot that could generate a report maybe that could be linked to from the category page. I'm not thinking anything too complicated (in my uneducated opinion, I think), just something that lists the page in the left column and the broken parameter in the right, you could sort both columns (so by article title or broken parameter), and this would make it much easier to visibly see where there is a great amount of overlap in broken parameters to more speedily clear out these categories.
I hope that makes sense, but it would hopefully assist the relevant WikiProjects in being able to clean up their respective articles as well, and potentially allow for these parameters to be either added in as aliases or if there is significant usage within a template, maybe even have an underused parameter modified to call an already existing name used in the majority. Thank you if anyone is willing or able to help out! -- Lightlowemon ( talk) 12:22, 28 June 2021 (UTC)
All German state broadcasters have to follow a 2009 law that they need to delete all online content after a year, so as not to "disadvantage" commercial news corporations. ("12.
Rundfunkänderungsstaatsvertrag" [
de], 1 June 2009)
This has big consequences for Wikipedia when they cite news from German state broadcasters:
It means legally mandated automatic link rot for such sources.
I suggest a bot that recognizes when
such a broadcaster is cited and automatically requests a save point from the Internet Archive, then links the save point in the ref.
Also see the Depublication [ de]: The whole German article is about this novel concept brought up the 2009 law.-- ΟΥΤΙΣ ( talk) 00:26, 27 June 2021 (UTC)
Pinging User:Marchjuly. We had a short discussion at Wikipedia talk:Files for discussion#Notifying uploaders about a bot that could leave FfD notices on the talk pages of all articles that use the nominated image. I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?), and having one for local files would be beneficial for all the same reasons (more participation at FfD, having a record in the article talk history, and general notification and discussion transparency purposes). — Goszei ( talk) 23:31, 2 July 2021 (UTC)
I believe there is already a bot like this for when Commons files are nominated for deletion (which bot is this, by the way?)That's Community Tech Bot [14] – SD0001 ( talk) 15:47, 5 July 2021 (UTC)
{{
FFDC}}
is non-trivial from a programming perspective. Media files can be displayed/embedded in a variety of ways (e.g. infoboxes, galleries, thumbnails, other templates I'm not thinking of) and adding {{
FFDC}}
as a caption in the correct location for each of these scenarios could be extremely tricky for a bot. To be clear, I don't think this would be a bad thing to have, but I do believe the reward to effort ratio is very low. OTOH, removing {{
FFDC}}
when FfD's are closed is much more straightforward. If there's consensus for it, I can build it. -
FASTILY 21:54, 7 July 2021 (UTC)
|caption=
or do some other tweak to the file's syntax to add the ffdc template. I also have noticed that ffdc template are sometimes removed by editors who don't think the file should be deleted; they seem to misunderstand the purpose of the template and mistake it for a speedy deletion template of sort. I never really considered any possible article talk page spamming effect about might have, but that does seem like valid point now that it's been made. I can see how not only new editors, but even long-term but not very experienced editors (i.e. long-term SPAs) might be find the templates "shocking" in some way. Maybe adding them to a WikiProject talk page would be better as well since the editors of a WikiProject are less likely to be shocked by such template. Even better might be to figure out a way to incorporate
WP:DELSORT into FFD discussion since many WikiProjects already have "alert pages" where their members can find out about things like PRODs, SPEEDYS and XFDs.There's always going to be people unhappy when a file is deleted; so, there's no way around that. Many times, though, these people claim they weren't properly notified at all or not in enough time to do something about it, and there might be some way to help mitigate that. I'm also a little concerned about comments such as
this where some editors nominating files for FFD might be relying too heavily on bots for things like notifications. For example, the wording of {{
FFD}} states as follows: Please consider notifying the uploader by placing {{subst:ffd notice|1=Ffd}}
on their talk page(s)
. That, however, seems a bit inconsistent with the instructions given in the "Give due notice" section at the top of the main FFD page and this might be causing some confusion. I don't use scripts, etc. whenever I start an FFD and do everything manually; this is a bit more time intensive perhaps, but I think it also might lead to less mistakes because you have to check off a mental list of items before the nomination is complete. Those who do rely on scripts or bots to do this type of thing though, might set the bot up to do only the bare minimum that is required; they're not wrong per se, but the automation might cause some things to be left out that probably shouldn't be left out. So, before any new bot is created and starts doing stuff, it might be better to figure out exactly what a human editor should be required to do first. --
Marchjuly (
talk) 22:44, 7 July 2021 (UTC)The bot action would be to check the 'what links here' page of articles that have been deleted by WP:AfD (and are still deleted) and report/list any with links to main space articles. And provide/update a list to the project Wikipedia:WikiProject Red Link Recovery
There should not be any redlinks to articles that have been deleted by the AfD process, C.1. "If the article can be fixed through normal editing, then it is not a candidate for AfD."
People would go through the list and make decisions about how to fix, maybe it should a redirect, maybe the redlinks need to be unlinked, maybe something else...
This is discussed on the project page A bot seems like the best solution.
Because there are several avenues that these might get addressed, it seems like the best solution would something that updates regularly, so corrected subjects fall of the list, and new subjects get added.
Jeepday ( talk) 15:37, 22 June 2021 (UTC)
I frequently come across pages that have dozens of sections, many of them from years ago, and many of them being bot-posted "External links modified" sections. I think very long talk pages, especially when most the content is not very relevant, makes them less usable. Most new users won't know how to setup bot archiving. Would it be reasonable for a bot to automatically setup archiving on pages that meet a certain criteria (length/age related), using a standard configuration with the default age depending on how active the talk page tends to be? ProcrastinatingReader ( talk) 17:16, 11 July 2021 (UTC)
The category page Category:Characters adapted into the Marvel Cinematic Universe has been correctly proposed for speedy deletion as it was previously deleted as a result of a prior discussion.
Can I safely delete that page assuming that a bot will notice and go through all 400+ pages that include this category, and remove it? Or is there an existing bot for which I need to queue up a request? ~ Anachronist ( talk) 20:58, 10 September 2021 (UTC)
I want to create a bot for welcoming new users. King Molala ( talk) 08:41, 8 September 2021 (UTC)
This is a pretty minor task, but throwing it out here, as it'd be very doable via bot. There are many ISBNs on Wikipedia that lack proper hyphenation. https://www.isbn.org/ISBN_converter always knows how to fix them, but it'd be nice to have a bot on-wiki do it instead. Whether or not we'd also want to switch to using a non-breaking hyphen at the same time is something to consider, given that it doesn't seem we'll be able to use a no-wrap to prevent unwanted line breaks. Alternatively, if this is too cosmetic, we could find a way to add it to the WP:GENFIX set. {{u| Sdkb}} talk 21:41, 1 July 2021 (UTC)
In 2019, the ~6,000 articles on bilateral relations were given short descriptions in the format of "Diplomatic relations between the French Republic and the Islamic Republic of Pakistan", with full country names, by Task 4 of User:DannyS712's User:DannyS712 bot ( BRFA here). These are way over the 40-character instruction in WP:SDFORMAT, for little utility in information conveyed. I propose that another task be run where the SD's are all removed, so that an automatic short description like "Bilateral relations" can be added to Template:Infobox bilateral relations. — Goszei ( talk) 23:45, 5 July 2021 (UTC)
Diplomatic relations between the French Republic and the Republic of Iraq(73 characters) at France–Iraq relations is definitely way too wordy. I'm not super keen on "bilateral relations", as many people don't know what "bilateral" means, and there's no indication that what we're talking about is diplomatic relations, not some other type, which is the only way I could really see these titles needing any clarification. Something like
Bilateral diplomatic relations(30 characters) might be good. {{u| Sdkb}} talk 23:23, 12 July 2021 (UTC)
Done Just noting that PrimeBOT took care of this task circa July 24. Thanks to Primefac. — Goszei ( talk) 01:30, 21 August 2021 (UTC)
I've removed this thread, pending review from the Oversight Team. Somewhere like BOTREQ isn't the best place for "here's a huge privacy issue". At this exact point in time I will be suppressing it, but I am also putting this up for review by the OS team, and we will determine whether this thread is acceptable to keep the discussion going, if it should just live in the history, or if it should stay suppressed. Primefac ( talk) 18:06, 25 August 2021 (UTC)
I want to create a bot for welcoming new users. — Preceding unsigned comment added by Tajwar.thesuperman ( talk • contribs) 18:50, 29 August 2021 (UTC)
I pretty frequently come across instances where there are too many line breaks at the end of a section in an article, creating an extra space. This seems like something a bot could pick up fairly easily, although I'm sure there are some exceptions/edge cases that'd throw it off if we're not careful. Would anyone be interested in putting together a bot to address these? I'm sure there are thousands and thousands of them. {{u| Sdkb}} talk 21:35, 13 July 2021 (UTC)
wikicode = wikicode.replace(/\n{3,}/gm, "\n\n");
. If interested, give me a ping and I can make a custom user script that does this. –
Novem Linguae (
talk) 10:30, 14 July 2021 (UTC)The new WP:RPP permanent archive has a missing page for requests filed in October 2013: Wikipedia:Requests for page protection/Archive/2013/10. – LaundryPizza03 ( d c̄) 18:21, 17 July 2021 (UTC)
When a draft is deleted images uploaded to Commons are not always checked and might be left to languish. Even if the images are acceptable they may be uncategorized.
To help with this I would request to have a bot automatically create a list of images in rejected (or deleted, if possible) drafts, with the following conditions:
Optional features:
I am bringing this here after comments in Wikipedia:Village pump (proposals)#Automatic lists of images in rejected or deleted drafts MKFI ( talk) 19:54, 16 June 2021 (UTC)
This message is sent on behalf of the WikiProject Guild of Copy Editors coordinators: Dhtwiki, Miniapolis, Tenryuu, and myself. We screwed up. The Guild of Copy Editors (GOCE) sent out a mass message to our mailing list before it was ready. It has too many errors to fix, so we would like it completely removed. We will edit the message and resend it at a later date.
If there is a friendly bot operator who can revert as many of these mass message additions as possible, we would be grateful. – Jonesey95 ( talk) 20:38, 17 September 2021 (UTC)
When (almost) all of the individual articles listed at
List of settlements in Central Province (Sri Lanka) were created in 2011 (around 1,500 articles), they used the same layout, linking to the Sri Lankan Department of Census and Statistics as an exteral link. The URL for the site has changed, so
https://www.statistics.gov.lk/home.asp
no longer works and should be replaced with
http://www.statistics.gov.lk/
on all articles.
I'm requesting a bot that can replace * [http://www.statistics.gov.lk/home.asp Department of Census and Statistics -Sri Lanka]
with *[http://www.statistics.gov.lk Department of Census and Statistics]
on each article. —
Melofors
T
C 21:42, 11 September 2021 (UTC)
In
Category:NA-Class France articles ( 24 ) there are many pages with hardcoded |class=na
in the {{
WikiProject France}} template. I would like the "na"/"NA" removed. This will allow redirects, templates, and such to file into the appropriate categories, and any oddballs can be sorted out individually after. --
awkwafaba (
📥) 01:35, 18 September 2021 (UTC)
Hello! I discovered that the Shoki Wiki is dead. We use quite a few links from that site for the translated text of Samguk sagi, as a source for Korean kings, like in Jinsa of Baekje. The links have a format of http://nihonshoki.wikidot.com/ss-23 where ss is samguk sagi and 23 is the number of the scroll. Some 60 pages link to various scrolls, and it would be just too much manual work to correct the links with the archive links. Anyone could do this with an archivebot? Thanks a lot. Xia talk to me 18:01, 22 August 2021 (UTC)
Template talk:Infobox film#Request for comments has been closed as consensus to reorder the fields like this:
Before | After |
---|---|
... |director= |producer= |writer= |screenplay= |story= |based_on= |starring= |narrator= |music= |cinematography= |editing= ... |
... |director= |writer= |screenplay= |story= |based_on= |producer= |starring= |narrator= |cinematography= |editing= |music= ... |
(See testcases for rendered examples.)
This means not just does the template need to be changed but in any article where a person notable enough to be linked appears in both |producer=
and any of |writer/screenplay/story/based_on=
, or in both |music=
and |cinematography=
or |editing=
, the linked and unliked occurrences will need to be swapped. So we need a bot to make changes like these:
Article | Before | After |
---|---|---|
Good Night, and Good Luck | | producer = [[Grant Heslov]] | writer = {{unbulleted list|George Clooney|Grant Heslov}} |
| writer = {{unbulleted list|George Clooney|[[Grant Heslov]]}} | producer = Grant Heslov |
The Usual Suspects | | music = [[John Ottman]] | cinematography = [[Newton Thomas Sigel]] | editing = John Ottman |
| cinematography = [[Newton Thomas Sigel]] | editing = [[John Ottman]] | music = John Ottman |
CirrusSearch's regex engine doesn't seem to support back references to capturing groups, so I don't know how many articles need fixing. I don't think we need to simply reorder the parameters in articles that don't require moving links, as that would be purely cosmetic, though I could see an argument either way. The link might not always be a plain wikilink but could be {{ ill}} etc. Also some articles must have refs or footnotes in relevant arguments, which could be a nuisance in figuring out what needs to be done.
To minimize disruption, I plan not to implement the changes to the template until a bot is ready to take on this task. Nardog ( talk) 22:09, 7 July 2021 (UTC)
order of input does not affect order of outputOf course. I was just using the parameters rather than the labels for the sake of explanation. (But as I said above, an argument could be made that reordering the input even if it doesn't change the output would prevent editors from accidentally linking the wrong occurrences of names and thus producing more instances of what we're trying to fix. I guess the sheer number of the transclusions makes a compelling case against making such cosmetic changes, though.)
|producer=
to |writer=
etc. (none involved |music=
). So, though the samples may not be completely representative, we can expect ~6%, or ~8,776, of the articles with the infobox would require moving a link. I can set up a tracking category, or is it better for the bot to go through them one by one?
Nardog (
talk) 11:19, 11 July 2021 (UTC)
|producer(s)=
or |music=
and see if the same phrases appear in |writer=
etc. Admittedly I imagine it'll turn up a lot of false positives, but I haven't got a better idea.
Nardog (
talk) 13:29, 11 July 2021 (UTC)
@ Primefac: So I just went ahead and semi-manually cleaned up the category (I hope it didn't upset your workflow ;)). Excluding existing DUPLINKs from the detection brought down the number from ~2,500 to ~1,500, which made it more manageable.
PrimeBOT's operation last month left hundreds of errors like [[[[Mexico]] City]]
, West [[German language|German]]y
, <ref name="[[Variety (magazine)|Variety]]">
, |name=[[Bruce Lee]]: A Warrior's Journey
, which I fixed as far as I could find. FWIW I just left comments, refs, and non-personal parameters (|name=
, |caption=
, |distributor=
, |released=
, etc.) alone, which was enough to avoid most of these.
Nardog (
talk) 10:42, 19 August 2021 (UTC)
As a former volunteer / current user, I periodically come across articles with years old {{refimprove}}
templates. (e.g.
example). I just remove them if it's obvious reference have been added since the tagging. Seems like a bot could do that, based on whatever criteria the community agrees on.
NE Ent 12:39, 5 July 2021 (UTC)
{{
refimprove}}
(which now redirects to {{
more citations needed}}
)?
GoingBatty (
talk) 14:53, 5 July 2021 (UTC)
If there's no interest in auto removing the tag, perhaps a bot could post a notice on the original posters talk page asking them to review the page? NE Ent 11:40, 14 July 2021 (UTC)
Hello! I have noticed that most of the articles in /info/en/?search=Category:National_Register_of_Historic_Places_in_Virginia link to an old page on the Virginia Department of Historic Resources website ( http://www.dhr.virginia.gov/registers/register_counties_cities.htm) that isn't actually informative (and if it was useful at one point, the archivebot doesn't have it).
Ideally, these links would point directly to the listing's page on the Virginia Landmarks Register website. These pages conveniently use the VLR number in the URL. (For example, for the listing https://www.dhr.virginia.gov/historic-registers/014-0041/, "014-0041" is the VLR number.) The vast majority of these pages also have a NRHP Infobox, which usually includes the VLR number as "designated_other1_number =".
Is there a way for a bot/script to crawl instances of the URL: " http://www.dhr.virginia.gov/registers/register_counties_cities.htm" and change it to " https://www.dhr.virginia.gov/historic-registers/{value of "designated_other1_number" in the page's Infobox}/"?
I've been doing this manually and I just realized that A) there are thousands of these and it's going to take me forever, and B) a robot could probably do this.
Thanks! Niftysquirrel ( talk) 14:27, 5 August 2021 (UTC)
I propose a bot to automatically or semi-automatically parse the various "Sister project" templates across all of the different WMF projects, and synchronize their parameters with Wikidata.
Examples of these templates:
In its most basic form, I think this bot could just parse Wikitext for templates that imply a direct equivalency between two sister project pages, and then add those to their Wikidata entries, with human approval, if they're not already documented. I think this behaviour should be fairly uncontroversial.
In more advanced forms, I think complete two-way synchronization and some degree of semantic behaviour could potentially be useful too. For example, a template on a Wikisource page could be used to automatically populate its Wikidata entry, and that information could then in turn be used to automatically add Template:Wikisource or similar to Wikipedia pages that don't already have it. And to take things even further, you could also E.G. treat links to Wikisource author pages and Wikisource works differently, possibly to the extent of automatically adding "instance of: writer" to Wikidata entries if they're treated as authors in the Wikitext but not currently subclassed as them on Wikidata.
These more advanced forms may require further discussion and consensus. Depending on accuracy, it might be worth keeping a human in the loop for all wiki content changes.
On technical terms, I suggest that the model for parsing these templates into structured data relationships (and the model for vice versa) be kept separate from the code that then applies edits based on those relationships.
Intralexical ( talk) 17:03, 7 August 2021 (UTC)
This is a combination of bot request and request for guidance/assistance on categories/lists/templates.
I created and maintain List of legally mononymous people; see Mononymous person & WP:MONONYM.
I believe it's gotten a bit unwieldy and underinclusive — and that it perpetuates common misconceptions about "legal names". I had initially named it as a contrast to the pre-existing "stage names" and "pseudonyms" lists, which I now believe was a mistake.
I would like to merge it with List of one-word stage names and the mononyms in List of pseudonyms, include the many other mononymous people (e.g. Burmese, royal, pre-modern, etc).
I believe it needs to be converted into something that tracks bio pages more flexibly and automatically, based on a category or template in the bio page itself, rather than through a manual list. I don't know how to do that, which is why I'm requesting help.
I would like the result to differentiate (e.g. by subcategorization or list filter/sort) name-related properties, e.g.:
Ideally, I would like the resulting page to include thumbnail-bio info, e.g. (if applicable):
That part isn't obligatory; e.g. it may not be feasible if the result is a category page.
I believe this means some combination of
I am not familiar with how WP handles such things, so another solution might well be better. Please take the technical parts above just as suggestions. I don't particularly care if e.g. the result is a category page vs a list.
My request is just that it should be automatic based on something on the article page, be easily filtered by type, and have a nicely usable result.
If the infobox lists a full name, title-excluding field with one name, then they're probably mononymous.
Pages with a single name in the WP:NCP#"X of Y" format will usually be mononyms, especially if royalty or pre-modern.
Pages with Template:Singaporean name with parameter 3 only (1 & 2 blank) should indicate a mononym.
Most pages with Template:Burmese name are mononyms, but one would need to check by deleting all honorifics ( wiktionary list) from the person's name. This should use the Burmese text, since ဦး is a title but ဥ is a name, and both are transliterated as "U"; e.g. in U Kyin U (not a mononym), the first U is a title, and the last is a name.
As I suggested above, a bot adding a mononym marker to these pages should do so in a way that's marked as bot-added / tentative. There will of course be false positives and false negatives as always. This is simply a suggestion for doing a bootstrapping pass and extracting the basic info.
I previously asked for help re Burmese names, but got no responses: 1, 2, 3, 4. Volteer1 recently suggested that bots might be an effective approach.
So… any suggestions for how this could best be achieved? Sai ¿? ✍ 17:15, 10 August 2021 (UTC)
When I find bare url's in infobox "website" fields, I always wrap it with the url template (example diff: Special:Diff/1039831301). I do this for two reasons: (1) the infobox width is often unnaturally expanded with long links because bare url's don't wrap, and (2) the template strips the display of http://, which looks better. I considered a possible fix in the code of the infoboxes themselves, but I believe that wouldn't work if two websites were added, or if there is already a url template/other template being used. I believe use of the url template in this field is already fairly widespread and common. — Goszei ( talk) 01:19, 21 August 2021 (UTC)
AFAIK the {{
url}}
template is not supported by the archive bots (please correct if wrong). Thus once the link dies - all links die - it will become link rot requiring manual fix. Which is fine, want to include the risks. No guarantee bots will ever support these templates, there are
thousands of specialized templates it is not feasible to program for each. The more we use them, the more link rot seeps in over time. I suppose if there are so many {{url}}
the sheer magnitude could put pressure on the bot authors to do something, but it would also require modifications to the Lua code probably, consensus on how it works. As it is, bare URLs are supported by most bots and tools. --
Green
C 15:19, 21 August 2021 (UTC)
There is a discussion over at WT:WPT looking for a bot to add/split some WPBS banner templates. I feel like such a bot already exists, so before I put in my own BRFA I figured I'd post here and hopefully find someone who remembers which bot that is (because at the moment I am coming up completely blank). Please reply there to avoid too much decentralisation. Thanks! Primefac ( talk) 14:58, 22 August 2021 (UTC)
Hi folks, not sure if this is a practical request, thought I'd ask anyway. The Science Fiction Encyclopedia has approximately 12,000 entries on people, most of whom are likely notable. As I discovered when writing Sarah LeFanu, at least some of them do not have Wikipedia articles. Is it practical for a bot to trawl through this list, check whether Wikipedia has an article on each entry, and save the entry to a list if it doesn't? Vanamonde ( Talk) 07:33, 23 August 2021 (UTC)
Hey all, hoping a bot (or an AWB master) could be deployed to help in the project space. Many, many WikiProject pages have code at the bottom that begins with:
[[tools:~dispenser/cgi-bin/transcluded_changes.py/ . . .
and of course that link is now bad (gives 404 error on toolserver); but since the pages use tools:
, this is not really a URL change request. An example use is at
WP:WikiProject Geelong#External watchlist.
All instances in the Wikipedia namespace that begin with the above string can be deleted, together with all other characters up until the closing ]]
; there is no retcon that will fix it. Thanks in advance,
UnitedStatesian (
talk) 18:35, 23 August 2021 (UTC)
Referencing this discussion: Wikipedia:Help desk#AiNews.com - Wrongly Indexed
It seems that ainews.com was formerly "Adult Industry News", a news site for the porn industry, which has a lot of citations. The domain now belongs to "Artifical Intelligence News". Needless to say, the new owner doesn't want its domain linked in porn-related articles.
Adult Industry News is now ainews.xxx.
Experimenting with some of the links from https://en.wikipedia.org/?target=*.ainews.com&title=Special%3ALinkSearch it seems that one cannot simply substitute .com with .xxx. The pages must be found on archive.org.
@ GreenC: I am not sure if InternetArchiveBot would handle this unless someone went through all ~130 links and tagged them with {{ dead link}}. I don't know of another bot that comes close. ~ Anachronist ( talk) 15:29, 27 August 2021 (UTC)
I found that some of the islands artical did not create corresponding archipelago variants. Robots can automatically identify and create them.-- q28 ( talk) 01:35, 27 July 2021 (UTC)
Would it be possible to have a bot add {{ reflist talk}} to talk page threads which have refs?(I am not watching this page, so please ping me if you want my attention.) SSSB ( talk) 08:39, 3 September 2021 (UTC)
A helpful message is shown after moving a page:
This section in a nutshell:
|
It seems like a lot of this could be be automated fairly straightforwardly.
It was pointed out on the Teahouse that Wikipedia:Double_redirects do get fixed by a bot, but the fair use rationales, navboxes, etc. also seem unnecessary to fix manually.
Intralexical ( talk) 13:03, 9 August 2021 (UTC)
hello i am currently running a Project, which counts User's Contributions on daily basis, The Project works on several wikis such as ( ckbwiki, SimpleWiki, ksWiki, ArWiki, jawiki) it works by checks User's Contributions and comparing it to previous Contribution, Ranking Top User's Accordingly, if a user is less active than before The Comparison will change to Red otherwise Green, it also Shows User Rights along with their contributions,
i need someone's Help, to make a bot Specially for the Project, cuz i am doing it Manually by myself and it takes so much time and energy, Can anyone help me to make a script for, i really appreciate it. —— 🌸 Sakura emad 💖 ( talk) 19:48, 1 October 2021 (UTC)
Hi Primefac and Sakura emad. Wikipedia:List of Wikipedians by number of edits/Configuration is the bot's source code for this report. -- MZMcBride ( talk) 22:36, 13 October 2021 (UTC)