This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Could someone please rename the following categories to Category:Paleozoic life of Statename? Abyssal ( talk) 20:43, 3 August 2017 (UTC)
Per Wikipedia:Village_pump_(proposals)/Archive 139#Redirect_talk_pages_with_only_banners, I'm requesting a bot with the following logic
For an example where this is already done, see WT:NJOURNALS. Headbomb { t · c · p · b} 19:39, 3 May 2017 (UTC)
{{Talk header}}. All the best: Rich Farmbrough, 19:21, 4 May 2017 (UTC).
Good evening ladies and gentlemen,
There has been a recent discussion at Help talk:IPA#Converting individual help pages for the various languages into subpages of Help:IPA, which I would appreciate you taking a look at for the full picture. In summary, we have to move a massive number of pages en masse; can any kind Samaritan devise a tool or other way to automate the tedious procedure? Cheers! Winged Blades Godric 04:26, 4 August 2017 (UTC)
-pairsfile parameter. — JJMC89 ( T·C) 18:44, 5 August 2017 (UTC)
In 2009, the list article List of films in the public domain was (quite correctly) moved to List of films in the public domain in the United States. There are over 200 articles that still link to the redirect; a sampling indicates that it's generally in the "See also" section.
Does there exist a bot that can update to use the correct link?
This is not a case of a redirect where the redirect has a correct but less desirable name than the actual target. This is a case where the redirect name is actually inaccurate and misdescriptive; without qualification, "in the public domain" indicates in the public domain worldwide, not merely in the US. I understand and agree that there's nothing inherently wrong in having a redirect in the See also section; this is a limited case where the redirect title is factually wrong and misleading.
I'm no bot-writer, and I suspect it's not worth coding a bot specifically for this, but if this is a task that an existing bot can do, that would be great. I started in on doing them manually ( [1], [2], [3], [4]) until I realized how many there were. TJRC ( talk) 18:46, 7 August 2017 (UTC)
[List of films in the public domain] using wikiget (./wikiget -a "insource:/\[List of films in the public domain\]/") then load that list into AWB and do a plain (non-regex) string replace. -- GreenC 15:55, 9 August 2017 (UTC)
Hi all, the Swiss Office of Statistics contacted Wikimedia CH to check the possibility of changing around 50,000 links. They have changed systems and, consequently, also the structure of the links. It seems that this modification should be done in several language versions, and they can provide a sheet listing each old obsolete link and the new one. Do you think that this activity can be done easily? Do we have to contact several Wikipedias, or is there a bot able to make the change in several language versions? -- Ilario ( talk) 09:29, 4 August 2017 (UTC)
Wikimedia CH is the Swiss Chapter of the global Wikimedia movement, and officially recognized as such by the Wikimedia Foundation. Hasteur ( talk) 19:33, 4 August 2017 (UTC)
Would it be possible for a bot to change every instance of the dead link "search.japantimes.co.jp" to "www.japantimes.co.jp" to fix references in Japan-related articles? Thanks.-- Pawnkingthree ( talk) 17:51, 16 August 2017 (UTC)
{{dead link}} or |deadurl=yes or converted to a https://web.archive.org/web/2012/http://search.japantimes... so it will be more serious bot work to untangle correctly. Almost wonder if it wouldn't be easier for someone to do it manually, or supervised with a text replace in AWB, manually undoing any extraneous dead tags. -- GreenC 19:21, 20 August 2017 (UTC)
Hi there,
I'm looking for a helpful bot who's willing to make a large number of fixes. At the moment, there are many articles directly related to
Afghanistan, where the incorrect {lang-fa} template is listed in the lede instead of the correct {lang-prs} template. All the {lang-fa} templates on these articles, i.e. articles about buildings, people (post-19th century), cities, towns, rivers, provinces, mountains, etc., need to be changed to the correct {lang-prs} template. So basically changing/adding 3 letters on every one of these articles.
The official name of the variant of the Persian language spoken in Afghanistan is Dari, and it has its own lang-template. However, until 2015, no such template existed on Wiki, hence people carelessly dropped the lang-fa template on all these articles. All the best, - LouisAragon ( talk) 23:19, 13 August 2017 (UTC)
I don't know how practical this might be, but I thought it would be helpful if redlinks could be tagged as such and a bot could then automatically add the month/year the redlink (or redlink template) was added.
So, for instance, if I create a link to John Simon (author), which is a redlink, one of the following would happen:
I feel this would be extremely helpful for determining how long a redlink has been extant, and would give editors an indication of the likelihood that the redlink might ever become a bluelink.
Never done a bot request before, so apologies if I've horribly mangled this or such. Thanks! DonIago ( talk) 13:30, 22 September 2017 (UTC)
So, as someone new to this process, what are the next steps here? There seems to be a general consensus that it's a good idea to create a bot to track and date redlinks, though I'm not sure there's agreement on the best form that should take. Something would certainly be preferable to nothing, I think. DonIago ( talk) 18:39, 2 October 2017 (UTC)
Would it be possible for a bot to archive each and every reference cited on a particular requested WP page? Doing so manually consumes a lot of time when there are hundreds of references on a page. -- Saqib ( talk) 15:27, 25 August 2017 (UTC)
Re this conversation, User:InternetArchiveBot does a great job scanning our 5,000,000 articles for deadlinks and fixing them, but it moves very slowly. The FA Coordinators agree that it would be useful to keep Featured material patrolled much more regularly. We could do this by manually dumping a list of article names into the tool, but that's not rigorous and a working 'Featureddeadlink bot' could presumably quite happily also patrol FLs, other FT articles and even GAs. So perhaps the request is a bot that will initially patrol the FAs only, with a view to expanding the remit to other quality material once it's proved itself. That level of detail I can leave to your expertise. -- Dweller ( talk) Become old fashioned! 09:44, 22 August 2017 (UTC)
I have been tagging lots of broken links to the New York Observer, but most of the tags that I added have been removed. Since the Internet Archive Bot is unable to repair these links, is there another way that we can update them? Jarble ( talk) 19:54, 27 August 2017 (UTC)
I have been collecting statistical data on WP:FAC for over a year now; see this thread for details. It would be a big help for certain kinds of reporting if I could convert a historical revision of WP:WBFAN into a simple list of editor/count pairs. Any format of output would be fine; table, comma separated list -- anything predictable. I just need to convert the names with wrapped star lists into names with numbers of stars, along with the date of the revision.
Ideally this would be something I could run at will, but if someone runs this and sends me a file with the results that would work too.
The benefit to Wikipedia is that we are trying to make it easier for first-time nominators to succeed at FAC, but we can't know if we're succeeding without information about who had WBFAN stars and when they got them. Thanks for any help with this. Mike Christie ( talk - contribs - library) 13:37, 9 August 2017 (UTC)
If someone could do that, that would be much appreciated. We've recently added some redirect detection/creation logic to the template, and it would be nice to know which articles are in need of review. Headbomb { t · c · p · b} 19:45, 29 August 2017 (UTC)
@ JJMC89: The infobox template has been massively updated with automated search functionality. If you could run the bot again, this time only on Category:Articles with missing ISO 4 redirects, that would be super helpful! Headbomb { t · c · p · b} 12:58, 1 September 2017 (UTC)
To help clear up the backlog in Category:Articles with missing ISO 4 redirects, if a bot could
|abbreviation=J. Foo.
Some articles will contain multiple infoboxes.
#REDIRECT [[Article containing Infobox journal]] {{R from ISO 4}}
|abbreviation=J. Foo.
Thanks! Headbomb { t · c · p · b} 11:57, 31 August 2017 (UTC)
Wikipedia has hundreds of articles that cite AOL News, but all of the links to this site are now broken. I tried using IABot, but it could not find archived URLs for these references. Is there another bot that can add archive URLs for all of these links? Jarble ( talk) 17:12, 1 September 2017 (UTC)
Jarble, IABot is currently rescuing aolnews.com where it can or leaving a dead link tag. If you see any it missed let me know. Should be done in an hour or so. -- Green C 14:33, 4 September 2017 (UTC)
I have noticed that there are a lot of ISO standards that do not have an article on Wikipedia. Considering the fact that there are a lot of ISO standards (by my estimate, over 21000 of them in English alone, some that have possibly been updated), of which (rough estimate) maybe 10% - 20% have an article, the number of ISO standards could potentially warrant some automated help. Since I couldn't find a concerted effort to document ISO standards in Wikipedia, I thought it'd be useful to debate whether it would be desirable and feasible to use a bot to increase Wikipedia's coverage of the ISO standards.
Should Wikipedia cover ISO standards extensively? Well-known ISO standards like the ISO 9000 and 27000 families obviously meet notability standards, but lesser-known standards might not. In my opinion, considering the fact that the ISO's work constitutes global standards, there is a case to be made, and there is most certainly precedent for jobs like this.
Since I don't have any previous experience with writing Wikipedia bots, I thought I'd chime in here first. Would this be something that would be useful for Wikipedia, and would it be feasible to create valuable articles or article stubs this way? There is information available from the [ website] in a structured form that could go some way towards creating articles, and each standard publishes some metadata and usually has a description (see for instance 1, 2, 3).
I don't know of any project that is already working towards incorporating information about international standards, or ISO standards specifically, into Wikipedia, nor a bot that works in a related field. If this might be useful, I might very well be interested in writing a bot that either writes stubs or automatic articles on ISO standards, prepares drafts, keeps metadata about ISO standards up-to-date, or something along those lines. I'd gladly hear some feedback. Nietvoordekat ( talk) 11:09, 31 August 2017 (UTC)
A very useful news site in a specialised field ( The Week in Chess) has changed domains at least twice, meaning there are (at a guess) hundreds of refs or external links to change. They would all change in a regular way (i.e. simple string replacement, or at worst regular expression replacement). There has got to be an already existing bot to do this. Can someone point me in the right direction? Adpete ( talk) 12:32, 31 August 2017 (UTC)
It's pretty simple. Every URL beginning with " http://www.chesscenter.com/twic/twic" needs to instead begin with " http://theweekinchess.com/html/twic". Note these are not the complete URLs, but anything after that is unchanged. e.g. at Baadur Jobava, reference 2 needs to change from http://www.chesscenter.com/twic/twic646.html#6 to http://theweekinchess.com/html/twic646.html#6 . I'm happy to run it if given some pointers. But if you want to run it, thanks, that'd be great. I'd be curious to hear how many URLs get changed, if you do.
And to Jonesey95, yes a template could be a good idea too, though enforcing compliance can be difficult, so I'd prefer to do the bot in the first instance. Adpete ( talk) 23:22, 31 August 2017 (UTC)
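The prefix swap Adpete describes is simple enough to sketch in a few lines; this is only an illustration (the function name is mine, and it assumes the old prefix always appears verbatim, as in the example above):

```python
# Swap the old chesscenter.com prefix for the theweekinchess.com one,
# leaving everything after the prefix (issue number, anchor) untouched.
OLD_PREFIX = "http://www.chesscenter.com/twic/twic"
NEW_PREFIX = "http://theweekinchess.com/html/twic"

def fix_twic_url(wikitext: str) -> str:
    return wikitext.replace(OLD_PREFIX, NEW_PREFIX)

fix_twic_url("http://www.chesscenter.com/twic/twic646.html#6")
# 'http://theweekinchess.com/html/twic646.html#6'
```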
twic/twic in the URL that need changing. Definitely something a bot would be good for. The other 100ish point to different places. Primefac ( talk) 12:54, 7 September 2017 (UTC)
In conjunction with the discussion raised at this discussion, it would probably be helpful for the community to get an idea of the numbers and keep track of the articles that are draftified from main-space, in a friendly format. SoWhy has written a SQL query for the purpose. I request the development of a bot that will maintain a list of draftified articles along with necessary info such as the time of draftification, the draftifying editor, the article creator, the last edit date, etc., in a tabular format, with the table updated at regular intervals. Thanks! Winged Blades of Godric On leave 11:49, 27 August 2017 (UTC)
AND log.log_params RLIKE 'noredir";s:1'). A bot should probably also find pages moved with a redirect where the redirect was later deleted as WP:R2. Also maybe list prior AFDs or MFDs for the article/draft. Regards SoWhy 12:02, 27 August 2017 (UTC)
@ Winged Blades of Godric, SoWhy, and Thincat: I've drafted an example report below.
Please let me know if you have any comments on it. — JJMC89 ( T· C) 23:11, 3 September 2017 (UTC)
Just in the last few days, I've twice messed up when blocking users: I left the block template and forgot to levy a block. This caused confusion in one circumstance, and in the other, another admin levied a longer block because it looked like the person had already come off an earlier shorter block.
What if we had a bot that would notify admins who added a block template without blocking the user? I'm envisioning the bot finding new substitutions of all block templates, checking to see whether the user really is blocked, and leaving a "you messed up" message (comparable to what BracketBot did) to remind the admin to go back and fix the situation. Sometimes one admin will block and another will leave the message; that's fine, so the bot shouldn't pay attention to who actually levied the block. And bonus points if the bot finds that a non-admin left the template on a non-blocked user's talk page; the bot could leave a note quoting the {{ uw-block}} documentation: Only administrators can block users; adding a block template does not constitute a block. See RFAA to request that a user be blocked. Finally, since actually doing the blocking is quick and simple, we don't need the bot to wait a long time; sometimes you need to compose a long and thoughtful message explaining the block, but you don't need to do that when using Special:Block. Nyttend ( talk) 01:30, 18 August 2017 (UTC)
Articles about deceased U.S. persons often cite the Social Security Death Index, which lies behind a paywall at ancestry.com. An example may be found at George Luz. I have no idea of the total count. The SSDI is also available at familysearch.org for free. The version at Family Search does not display the social security number; the version at ancestry once did but, according to our page no longer does. Converting from ancestry to family search will, I think, require a little human effort and judgment. I don't know if that raises a WP:SYNTHESIS flag. Is it possible to search for uses of the SSDI at ancestry and put them into a list or, preferably, a hidden (Wikipedia:?) category so they can be changed to the Family Search version?-- Georgia Army Vet Contribs Talk 00:53, 5 September 2017 (UTC)
Both params (trans_title and accessdate) are deprecated and give an "ugly" warning to the readers. Changing them to trans-title and access-date, respectively, eliminates the warning. MYS77 ✉ 11:43, 10 November 2017 (UTC)
I could do this. -- Magioladitis ( talk) 13:46, 10 November 2017 (UTC)
|accessdate= is a valid parameter. – Jonesey95 ( talk) 14:24, 10 November 2017 (UTC) {{archive now}}
I've been manually adding lots of links to references in articles like this one. Does Wikipedia have any bots that can automate this process using Google Scholar or something similar? Jarble ( talk) 21:10, 18 August 2017 (UTC)
I am making this request on behalf of WikiProject Finance and WikiProject Investment. The two projects are merging so there are two things that a bot is needed for:
{{WikiProject Investment}} banners on talk pages of articles that were only assessed by the Investment project with the {{WikiProject Finance}} banner. It would help immensely! Cheers. WikiEditCrunch ( talk) 17:58, 8 September 2017 (UTC)
The site closes in the near future, so it is necessary to save the links in the web archive. Is it possible to collect all the links from our articles to this site on one page for the convenience of archiving? Many were included through Template:SportsReference. In general, it would be necessary to archive all the athletes' profiles from there, regardless of whether we have articles on them. Who has something to offer? It would be good to do this on Wikipedia in all languages. JukoFF ( talk) 13:06, 20 September 2017 (UTC)
Can a bot or tool be coded which has the capability to suggest references for an article, or maybe for a statement? -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 09:59, 22 September 2017 (UTC)
I am requesting a bot to change code like this:
{{cite web
| title = Games played by Jack Cork in 2014/2015
| url = http://www.soccerbase.com/players/player.sd?player_id=45288&season_id=144
| publisher = Soccerbase
| accessdate = 31 January 2015}}
to this:
{{soccerbase season|45288|2014|accessdate= 31 January 2015}}
which gets the job done faster than doing it manually and does not introduce errors in later seasons when providing references for new seasons.
Iggy ( talk) 12:32, 16 September 2017 (UTC)
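For anyone picking this up, the conversion can be sketched as a single regex over the citation. This is a hedged sketch only (pattern and function name are mine): it assumes the exact field order of the example citation, which real articles will not always follow. The player_id comes from the URL, and the first year of the "in yyyy/yyyy" title becomes the template's season parameter.

```python
import re

# Match the example-style {{cite web}} and capture season year, player_id,
# and accessdate for reassembly into {{soccerbase season}}.
CITE = re.compile(
    r"\{\{cite web\s*\|\s*title\s*=\s*Games played by [^|]*? in (\d{4})/\d{4}\s*"
    r"\|\s*url\s*=\s*http://www\.soccerbase\.com/players/player\.sd"
    r"\?player_id=(\d+)&season_id=\d+\s*"
    r"\|\s*publisher\s*=\s*Soccerbase\s*"
    r"\|\s*accessdate\s*=\s*([^}|]+?)\s*\}\}"
)

def to_soccerbase_season(wikitext: str) -> str:
    return CITE.sub(r"{{soccerbase season|\2|\1|accessdate=\3}}", wikitext)
```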
{{cite news |title=Games played by Wayne Rooney in 2002/2003 |url=http://www.soccerbase.com/players/player.sd?player_id=30921&season_id=132 |publisher=Soccerbase.com |date=6 April 2011 |accessdate=6 April 2011 }}
at Wayne Rooney, [http://www.soccerbase.com/players/player.sd?player_id=13501&season_id=129 "Games played by Thierry Henry in 1999–2000"] at Thierry Henry, and other articles similar to these? -- Kanashimi ( talk) 11:06, 17 September 2017 (UTC)
{{cite news}}, {{cite web}} and the like by matching the URL. (The only question is whether the formula to go from season year to season ID at Template:soccerbase season can really be trusted when doing the reverse conversion.) Out-of-template references are of course another matter. Tigraan Click here to contact me 15:58, 18 September 2017 (UTC)
{{cite web |title=Richard Cresswell |url=http://www.soccerbase.com/players/player.sd?player_id=8959 |work=Soccerbase |publisher=Centurycomm |accessdate=12 September 2015}}
at York City F.C. Is there a better solution? Is using Template:soccerbase or something like it a good idea? ( Template:Soccerbase is still not in a citation format.) -- Kanashimi ( talk) 13:29, 19 September 2017 (UTC)
|name= parameter, or with the exceptions (season_id=146, mostly), and I didn't realise there had been no communication: sorry about that. Mostly, you left the season_id=146 ones unchanged, which was OK, but another time, it might be worth asking rather than guessing. There's one edit I found, here, which is a bit of a mess: I've fixed it manually. Thank you for your work. cheers, Struway2 ( talk) 09:50, 22 September 2017 (UTC)
to a certain number of articles and found out there are still around 200+ articles to be done. I probably should have mentioned that in the first post of this thread. Iggy ( talk) 14:25, 22 September 2017 (UTC)
A Wikidata query informs us that there are (at the time of writing) 1,556 people with an article on English Wikipedia and an ORCID iD in Wikidata. However, Category:Wikipedia articles with ORCID identifiers has only 1,421 members.
This means that 135 - a considerable percentage - of the people found by the Wikidata query do not have the {{ Authority control}} template at the foot of their article.
The same is no doubt true for other authority control identifiers, such as VIAF.
We need a bot, please, to add the template to those articles, and perhaps more.
If the template is added to an article and no relevant identifier is found, it does not display - so it can safely be added to all biographical articles. (If this might fall foul of COSMETICBOT, then it could be added as part of other tasks, such as general fixes done by AWB.)
Can anyone kindly oblige? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:15, 22 September 2017 (UTC)
HelpBOT responds to help citations with advice, and welcomes new editors to Wikipedia. — Preceding unsigned comment added by Lookis ( talk • contribs) 04:03, 11 September 2017 (UTC)
There is a great deal of redundancy between the parent Category:Storyboard artists and the child Category:American storyboard artists. Per WP:SUPERCAT "an article should be categorised as low down in the category hierarchy as possible, without duplication in parent categories above it. In other words, a page or category should rarely be placed in both a category and a subcategory or parent category (supercategory) of that category." Could someone create a bot to remove the redundancy? Thanks! Mtminchi08 ( talk) 08:46, 24 September 2017 (UTC)
The categories under Category:Members of the Parliament of England (pre-1707) by parliament were created before July 2016 when RfC on date ranges was closed. That RfC changed how the MOS:DATERANGE is specified.
Currently the names that contain a date-range are in the format ccyy–yy (unless the century is different) rather than the range style now recommended by MOS:DATERANGE ccyy–ccyy. So I am requesting a bot job to run through all the subcategories and sub-subcategories changing the name of the subcategories and sub-subcategories to ccyy–ccyy and the corresponding category names in articles that are within such categories.
For example, the subcategory Category:16th-century English MPs contains a subcategory Category:English MPs 1512–14. To be MOS:DATERANGE compliant it ought to be renamed Category:English MPs 1512–1514.
-- PBS ( talk) 10:42, 23 September 2017 (UTC)
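The rename rule itself is mechanical; a hedged sketch (my own regex, assuming every abbreviated range uses an en dash and both years share a century, as the request states; cross-century names and already-full ranges are left alone):

```python
import re

# Expand an abbreviated same-century range (ccyy–yy) to ccyy–ccyy.
# The dash matched here is an en dash, as used in the category names.
RANGE = re.compile(r"\b(\d{2})(\d{2})–(\d{2})\b")

def expand_range(name: str) -> str:
    return RANGE.sub(r"\1\2–\1\3", name)

expand_range("Category:English MPs 1512–14")
# 'Category:English MPs 1512–1514'
```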
Citations to BBC Genome should be amended thus, as the Genome is merely a front end to scans of The Radio Times. Metadata can be fetched using Citoid (or the Zotero translator for Genome, which Citoid uses). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 09:49, 28 September 2017 (UTC)
I'd like to be notified by bot every time someone joins the WikiProject JavaScript. Is there a bot that can do this? The Transhumanist 06:02, 29 September 2017 (UTC)
{{ National Heroes of Indonesia}} currently includes a transcluded category, Category:National Heroes of Indonesia. Can someone with AWB run over the pages linked in that template to add the category and then remove the category from the template? (Besides the book link.) -- Izno ( talk) 21:22, 28 September 2017 (UTC)
To reduce lint errors in Lint errors: Misnested tag with different rendering in HTML5 and HTML4, would someone be able to do a bot run that would do the following search and replaces:
Maybe even cyberpower678 might be able to get Cyberbot II to do it? -- WOSlinker ( talk) 16:50, 29 September 2017 (UTC)
Can someone use a bot to remove all of the images from the commented-out list of articles here? Abyssal ( talk) 12:59, 2 October 2017 (UTC)
Collapsed List
This is a longstanding thing that annoys me, so here's a BOTREQ for it. Unicode subscripts and superscripts#Superscripts and subscripts block contains a list of the affected characters.
The request covers two distinct tasks:
A) Page moves:
{{DISPLAYTITLE:Foo<sup>2</sup>bar}}
)
B) Page cleanup (² → <sup>2</sup>)
Headbomb { t · c · p · b} 18:11, 15 August 2017 (UTC)
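For task B, a character-table sketch (digits only here; the full Unicode block linked above also contains letters, signs, and parentheses, and a production bot would want to group consecutive characters under one tag rather than tag them one by one):

```python
# Map Unicode super-/subscript digits to plain digits wrapped in markup.
SUPERSCRIPTS = {"⁰": "0", "¹": "1", "²": "2", "³": "3", "⁴": "4",
                "⁵": "5", "⁶": "6", "⁷": "7", "⁸": "8", "⁹": "9"}
SUBSCRIPTS = {"₀": "0", "₁": "1", "₂": "2", "₃": "3", "₄": "4",
              "₅": "5", "₆": "6", "₇": "7", "₈": "8", "₉": "9"}

def markupify(text: str) -> str:
    out = []
    for ch in text:
        if ch in SUPERSCRIPTS:
            out.append("<sup>%s</sup>" % SUPERSCRIPTS[ch])
        elif ch in SUBSCRIPTS:
            out.append("<sub>%s</sub>" % SUBSCRIPTS[ch])
        else:
            out.append(ch)
    return "".join(out)

markupify("E = mc²")  # 'E = mc<sup>2</sup>'
```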
A case-by-case approach might work best though. For page moves, with superscripts (filtering User/Wikipedia space), we get
Extended content
I don't see any reason why any of those shouldn't render like we do with Vitamin B6, Golem100, Omega1 Scorpii, 12e Régiment blindé du Canada, Aice5, or Tommy heavenly6 discography. Headbomb { t · c · p · b} 21:33, 15 August 2017 (UTC)
My shared DDNS domain was lost to a domain squatter. I would like the mass removal of links left by DPL bot on User talk pages. In short remove " (check to confirm | fix with Dab solver)" from edits like [8]. — Dispenser 17:38, 29 September 2017 (UTC)
List of pages in namespace 0-15 that contain the string "dispenser.homenet.org":
Collapsed List
Comment The bot edited my User talk and pointed me to this discussion. Denying cybersquatters is a good cause so I guess the bot's actions are alright. -- Lenticel ( talk) 02:41, 3 October 2017 (UTC)
Comment - I wasn't happy having content removed from my talk archives. Reverted the bot & replaced homenet.org with info.tm which fixed the problem without loss of function. Cabayi ( talk) 09:10, 3 October 2017 (UTC)
Questions - What exactly happened to cause you to lose control of the domain? Is there anything preventing you from seizing it back? If so, what? Whoop whoop pull up Bitching Betty | Averted crashes 18:34, 4 October 2017 (UTC)
I would like to request the following task, to point chart references to the correct weeks instead of pointing them to the page showing the current chart. For example, scope="row"{{singlechart|UK|2|artist=Calvin Harris|song=Feel So Close|date=2011-09-03}} points us to the most recent chart, but changing it to scope="row"{{singlechart|UK|2|artist=Calvin Harris|song=Feel So Close|date=20110903}} directs us to the relevant week, when the song first reached its highest position.
Hence the bot will do this: {{singlechart|UK|peak|artist=artistname|song=name of song|date=yyyy-mm-dd}} → {{singlechart|UK|peak|artist=artistname|song=name of song|date=yyyymmdd}}, which removes the dashes in the date parameter. That way every music singles page which has this type of code will then have the correct links on citations. Also, the same problem also exists in the Scottish charts. Iggy ( talk) 16:14, 4 October 2017 (UTC)
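For the record, the date rewrite is a one-line regex; a hedged sketch (my own pattern, assuming the date sits in a |date= parameter inside {{singlechart}} and is formatted yyyy-mm-dd):

```python
import re

# Strip the dashes from the |date= parameter of {{singlechart}} only,
# leaving dates elsewhere (e.g. in cite templates) untouched.
PATTERN = re.compile(
    r"(\{\{\s*singlechart\b[^}]*?\|date=)(\d{4})-(\d{2})-(\d{2})",
    re.IGNORECASE,
)

def strip_date_dashes(wikitext: str) -> str:
    return PATTERN.sub(r"\1\2\3\4", wikitext)
```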
{{{date}}} with {{digits}} as appropriate. It will strip anything that's not a digit for those instances. @ Iggy the Swan: I can make the request for you, but exactly which links need to be formatted like this? Please provide full links. Thanks. Nihlus 01:33, 6 October 2017 (UTC)
Made some edits to the parameters of {{ Infobox television episode}}, code in Template:Infobox television episode/sandbox, test cases in Template:Infobox television episode/testcases. Requesting a bot after no objection at Wikipedia talk:WikiProject Television#Template:Infobox television episode updates. Updates to template have already been performed; current usages of the template will not be affected by this update.
|episode_list = [[Game of Thrones (season 7)|''Game of Thrones'' (season 7)]]<br>[[List of Game of Thrones episodes|List of ''Game of Thrones'' episodes]]
|season_list = Game of Thrones (season 7) |episode_list = List of Game of Thrones episodes
A bot would just need one set of regex to make these changes.
\|\s*episode_list\s*=\s*\[\[([^\|\]]*).*<br[^>]*>\[\[([^\|\]]*).*
| season_list = $1\n| episode_list = $2
Cheers. -- Alex TW 06:26, 3 October 2017 (UTC)
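For anyone implementing this outside AWB, the same regex works in Python once $1/$2 become \1/\2; on the Game of Thrones example above it produces the two-parameter form:

```python
import re

# The regex from the request, translated from AWB's $1/$2 replacement
# syntax to Python's \1/\2 backreferences.
pattern = re.compile(r"\|\s*episode_list\s*=\s*\[\[([^\|\]]*).*<br[^>]*>\[\[([^\|\]]*).*")
replacement = r"| season_list = \1\n| episode_list = \2"

old = ("|episode_list = [[Game of Thrones (season 7)|''Game of Thrones'' (season 7)]]"
       "<br>[[List of Game of Thrones episodes|List of ''Game of Thrones'' episodes]]")
print(pattern.sub(replacement, old))
```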
Per WT:CFD#Auto_listing_previous_discussions there is a desire to have previous CFDs listed at discussions for repeat nominations. A bot should be able to do this, by looking at the category talk pages, or looking through old revisions of the category pages. - Evad37 [ talk] 06:39, 6 October 2017 (UTC)
I've been drafting a series of lists of Paleozoic life by state, and I used the Alabama page as a template to set the others up. Could someone replace the text "Alabama" with the state named in the title of the following articles? Abyssal ( talk) 16:51, 19 September 2017 (UTC)
{{subst:str right|{{subst:PAGENAME}}|30}}, and that picked the state name out of the name of each draft. You'll see I had to fix up the pages with disambiguation suffixes. -- John of Reading ( talk) 17:14, 19 September 2017 (UTC)
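For anyone puzzled by the magic number: {{str right|string|30}} drops the first 30 characters, and the common prefix "List of the Paleozoic life of " is exactly 30 characters long. In Python terms:

```python
# {{subst:str right|{{subst:PAGENAME}}|30}} keeps everything after the
# first 30 characters, which for these drafts is the state name.
def str_right(s: str, n: int) -> str:
    return s[n:]

str_right("List of the Paleozoic life of Alabama", 30)  # 'Alabama'
```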
==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:Acteon tornatilis 2.png|thumb|right|A living ''[[Acteon ]]''.]] [[File:Bonefish.png|thumb|right|Illustration of a living ''[[Albula]]'', or bonefish.]] [[File:Ancilla ventricosa 001.jpg|thumb|right|Modern shell of ''[[Ancilla (gastropod)|Ancilla]]''.]] [[File:Appalachiosaurus montgomeriensis.jpg|thumb|right|Life restoration of ''[[Appalachiosaurus ]]''.]] {{Compact ToC}} * †''[[A
@ John of Reading: Hey, John, do you think you could do me a few more favors? Could you run that bot to remove lines of code containing "sp." from the following commented-out list of articles just like you did on September 20th? Then could you scan these articles for the phrases " – o" and " – t" and replace them with " @ o" and " @ t" before removing every "–" from the articles and then replacing the "@"s with the "–" again? Then could you run that operation from September 21st where you replaced the first instance of each capital letter in the format "* †[[A" with a block of code, but with this new smaller block of code listed below:
==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> {{Compact ToC}} * †''[[A
Abyssal ( talk) 14:51, 28 September 2017 (UTC)
Would a bot that turns bare Twitter references into formatted Template:Cite tweet citations be feasible? The four basic parameters of user, number, date, and title should be easily machine-readable, and even though a bot wouldn't be able to interpret the optional parameters, the result would still be better than a bare URL. Madg2011 ( talk) 23:03, 5 August 2017 (UTC)
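A sketch of the machine-readable half of this (the pattern and helper are mine; the title and date would require fetching the tweet itself, so only user and number are extracted here):

```python
import re

# Pull the user name and status number out of a tweet URL; these are two
# of the four basic parameters of {{cite tweet}} mentioned above.
TWEET = re.compile(r"https?://(?:www\.)?twitter\.com/(\w+)/status(?:es)?/(\d+)")

def cite_tweet_params(url: str):
    m = TWEET.match(url)
    if not m:
        return None
    user, number = m.groups()
    return "{{cite tweet |user=%s |number=%s}}" % (user, number)

cite_tweet_params("https://twitter.com/jack/status/20")
# '{{cite tweet |user=jack |number=20}}'
```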
The discussion is here: Wikipedia:AutoWikiBrowser/Tasks#Comma before Jr. and Sr., and the list of ~1678 pages is here: User:Certes/JrSr/titles. A redirect may or may not exist at the destination page for up to 103, depending on how long it takes the CSD G6 backlog to clear. ~ Tom.Reding ( talk ⋅ dgaf) 22:43, 17 October 2017 (UTC)
Please could someone substitute the wrong File:Coccarda Italia.svg with the correct File:Coccarda Coppa Italia.svg. See here. Thanks -- Arch Enzo 09:04, 26 October 2017 (UTC)
Can a bot change links from *rane.com/par-* to *aes.org/par/*?
Examples:
https://en.wikipedia.org/?title=DBFS&diff=807955578&oldid=785161053
http://www.rane.com/par-d.html#0_dBFS → http://www.aes.org/par/d/#0_dBFS
https://en.wikipedia.org/?title=Boucherot_cell&diff=807957354&oldid=723960052
http://www.rane.com/par-b.html#Boucherot → http://www.aes.org/par/b/#Boucherot
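A sketch of the rewrite implied by the two examples above, assuming every old link follows the par-&lt;letter&gt;.html pattern (anything else is left untouched):

```python
import re

# The letter after "par-" becomes a path segment, ".html" is dropped,
# and the fragment identifier is preserved.
RANE_RE = re.compile(r"https?://www\.rane\.com/par-([a-z])\.html(#\S*)?")

def rewrite_rane(url):
    m = RANE_RE.match(url)
    if not m:
        return url  # leave anything unexpected untouched
    letter, frag = m.group(1), m.group(2) or ""
    return f"http://www.aes.org/par/{letter}/{frag}"
```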
Still 26 transclusions. -- Magioladitis ( talk) 21:18, 12 November 2017 (UTC)
Per recent move request Talk:Doctors (2000 TV series)#Requested move 27 October 2017 that resulted in no consensus, could all the links to Doctors (TV series) be automatically changed to match the article's actual title, so that there is no problem with the redirect from Doctors (TV series) to the disambiguation page? -- wooden superman 15:29, 27 November 2017 (UTC)
This is related to my previous request, but was so different that I thought I'd make a new heading for it. I'm making a series of ~50 articles listing the prehistoric animals that once inhabited each US state. I was wondering if someone could rig a bot to search the articles linked in the list for all the images and copy them into the article under the list heading in the format "[[File:Alethopteris PAMuseum.jpg|thumb|right|Fossil of ''[[articletitlegenusname]]''.]]". Draft:List of the Paleozoic life of Alabama is a good example of what I'm going for; I had originally tried to do this manually. Article list hidden in a comment here. Abyssal ( talk) 19:40, 19 September 2017 (UTC)
Collapsed List
I have done a little experimenting on Draft:List of the Paleozoic life of Alabama, but I don't know if this is what you want. Please tell me what you think and how I can improve the tool, thank you. -- Kanashimi ( talk) 08:17, 22 September 2017 (UTC)
By the way, here is the source code: 20170922.scan link targets in page.js on GitHub -- Kanashimi ( talk) 09:04, 22 September 2017 (UTC)
Please let me know any time when you are ready. -- Kanashimi ( talk) 11:51, 8 October 2017 (UTC)
Hi,
In enwiki, 189 pages have an old external link to the website http://199.9.2.143 (source: Special:LinkSearch search).
The issue, with an example: the link http://199.9.2.143/tcdat/tc10/ATL/12L.JULIA/trackfile.txt has a server redirect to the HTTPS protocol, but the redirect fails.
Please, replace http://199.9.2.143/ by https://www.nrlmry.navy.mil/ with a bot. -- Manu1400 ( talk) 06:19, 8 December 2017 (UTC)
Can anyone use a bot to find instances of the following text and remove the crosses in the articles commented out in the section code? Abyssal ( talk) 23:59, 6 December 2017 (UTC)
Acebot has not edited since 25 September, and Ace111 has not fixed the bot. It needs to be replaced or fixed as soon as possible, since {{ NUMBEROF/data}} is used in a number of Wikipedia articles and has not been updated manually. Jc86035 ( talk) 07:55, 26 November 2017 (UTC)
It's been argued that linefeed (LF) characters in the wiki source "create many ... issues" including "both rendering [and] accessibility issues" ( Help talk:Citation Style 1#Pointless whitespace error). Someone's set up the citation templates to throw red error messages that try to force editors to find and remove LFs in the template input. This is extremely undesirable, an abuse of the citation templates to try to arm-twist people into doing technical work they're often not competent to do (the average editor doesn't even know what a linefeed is), and interfering with a basic all-editors responsibility to cite sources.
This is obviously bot work, and since it's fixing legit accessibility and rendering problems, it's not WP:COSMETICBOT. I would suggest
Frankly, it's weird that MediaWiki doesn't already deal with this as part of its routine parsing upon page save. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 21:26, 3 October 2017 (UTC)
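A sketch of what such a bot run could do, assuming the fix is simply collapsing linefeeds inside citation transclusions. This naive regex version only handles templates without nested "{{ }}" pairs; a production bot would use a real parser such as mwparserfromhell:

```python
import re

# Match a simple (non-nested) citation template transclusion.
CITE_RE = re.compile(r"\{\{\s*[Cc]ite [^{}]*\}\}")

def strip_linefeeds(wikitext):
    """Collapse each newline run inside a citation template into one space."""
    def collapse(match):
        return re.sub(r"[ \t]*\n[ \t]*", " ", match.group(0))
    return CITE_RE.sub(collapse, wikitext)
```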
Here's an example of where this causes an issue:
{{cite journal}}: |access-date= requires |url= (help); Cite journal requires |journal= (help)
I'll admit, I was under the impression that
{{cite journal}}: Cite journal requires |journal= (help)
would equally be broken, but apparently those are not. Headbomb { t · c · p · b} 18:03, 4 October 2017 (UTC)
A vast majority of the lint errors for double-leading-colons in links (which are no longer rendered as correct links) are from bot-created pages that are subpages of Wikipedia:Version 1.0 Editorial Team. I wanted to propose a bot that would fix these errors to unclog the lint error list so we can identify other sources of such errors. -- Ahecht ( TALK PAGE) 15:32, 9 October 2017 (UTC)
To me, it looks like the vast majority of the errors (at least in the first few pages) are in talk page signatures of "::User:RHaworth", which could probably be fixed by a bot or a patient AWB editor. – Jonesey95 ( talk) 14:00, 11 October 2017 (UTC)
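A sketch of the signature fix, assuming the broken links all start with a literal "[[::" and that there is no legitimate use of that prefix:

```python
import re

def fix_double_colon_links(wikitext):
    """Drop the stray double colon from links like [[::User:Example|...]]."""
    # Only a double colon is touched; single-colon forms such as
    # [[:Category:...]] are valid and left alone.
    return re.sub(r"\[\[::\s*", "[[", wikitext)
```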
We have lots of lists of Wikipedians: accounts who have made the most edits, created the most new articles, deleted the most pages and handed out the most blocks. Why not have a list of Wikipedians who have received the most thanks? Ϣere SpielChequers 13:24, 4 October 2017 (UTC)
Wikipedia:Database reports/Thanks usage -- Edgars2007 ( talk/ contribs) 17:12, 19 October 2017 (UTC)
If someone is editing on mobile, there's a chance the link they wish to cite will be an amp page. Requesting a bot to identify these pages and convert them to the full version. Example: amp full. Terrorist96 ( talk) 18:29, 23 September 2017 (UTC)
Also, when I tried saving my post, I got this warning about the google.com/amp url (so it seems that Wiki already prevents you from posting a google.com/amp/ link, which is why I modified it above):
Your edit was not saved because it contains a new external link to a site registered on Wikipedia's blacklist. Blacklisting indicates past problems with the link, so any requests should clearly demonstrate how inclusion would benefit Wikipedia.
The following link has triggered a protection filter: google.com/amp/ Either that exact link, or a portion of it (typically the root domain name) is currently blocked. Solutions:
Terrorist96 ( talk) 18:36, 23 October 2017 (UTC)
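For the Google AMP cache form of these URLs, the original page is recoverable from the path itself. A sketch, assuming the "/amp/s/" = HTTPS convention; publisher-side AMP URLs (e.g. ".../amp/" on the site itself) vary per site and are not handled here:

```python
import re

# Everything after "/amp/" is the original page; a leading "s/" means HTTPS.
AMP_RE = re.compile(r"https?://(?:www\.)?google\.com/amp/(s/)?(.+)")

def unamp(url):
    m = AMP_RE.match(url)
    if not m:
        return url  # not a Google AMP cache URL
    scheme = "https" if m.group(1) else "http"
    return f"{scheme}://{m.group(2)}"
```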
Reposting...
Four of the task forces for WikiProject Caribbean have graduated to full-fledged WikiProjects with their own banner templates, and the task force parameters have been deprecated: {{WikiProject Cuba}}, {{WikiProject Grenada}}, {{WikiProject Haiti}}, and {{WikiProject Trinidad and Tobago}}. We need a bot to go through the existing transclusions of {{WikiProject Caribbean}} and perform the following changes:
Please also migrate the task-force importance parameters if they exist, for example |cuba-importance=. If there isn't a task-force importance parameter, just leave the importance blank. The |class=, |category=, |listas=, and |small= parameters should be copied from the {{WikiProject Caribbean}} template if they exist.
Kaldari ( talk) 20:39, 23 October 2017 (UTC)
Since this pertains to (semi) automation, I thought you might like a heads up about what I've been working on...
I'm in the process of building scripts for viewing outlines and for outline development. So that other programmers can follow along with how the source code works, I've provided extensive notes on the scripts' talk pages.
So far, there is:
It is my objective to build a set of scripts that fully automate the process of creating outlines. This end goal is a long way off ( AI-complete?). In the meantime, I hope to increase editor productivity as much as I can. Fifty percent automation would double an editor's productivity. I think I could reach 80% automation (a five-fold increase in productivity) within a couple years. Comments and suggestions are welcome.
There's more:
I look forward to your observations, concerns, ideas, and advice. The Transhumanist 08:25, 26 October 2017 (UTC)
Is it possible to add such cleanup tasks to one of the existing bots, or to create a bot for such cleanups? These fill up the maintenance categories of unknown parameters unnecessarily. Even GA-level articles have such issues. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 13:39, 14 September 2017 (UTC)
|300px. We might need to revisit the full image syntax in the infobox once we determine which ones don't require it, or at least convert all of them to not require it. Nihlus 21:32, 27 October 2017 (UTC)
Some pages have |image=File:Blah.jpg|thumb|250px in their articles. A bot that scrubbed these categories for those straightforward errors would be helpful. – Jonesey95 ( talk) 01:53, 28 October 2017 (UTC)
This is a huge task and seems nearly impossible to do manually. This will enable effective template maintenance and easier consolidation. I have been trying to clean up Category:Pages using infobox Hindu temple with unknown parameters for quite some time and the task seems a never-ending process. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 17:52, 19 September 2017 (UTC)
{{Infobox Hindu temple}}? Is there more? — nihlus kryik ( talk) 21:28, 23 September 2017 (UTC)
Replacing |pushpin_map= with |map_type= across all the Infoboxes can be a good start. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 03:11, 28 October 2017 (UTC)
I recently found over 100 broken links to academia.edu, but most of them haven't been repaired yet. Would it be possible to automatically repair these links, or at least automatically tag them with the {{ dead link}} tag? Jarble ( talk) 02:18, 28 October 2017 (UTC)
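The tagging half of this is straightforward once a link is known (by some separate check) to be dead. A sketch; the date value is only an example:

```python
import re

def tag_dead(wikitext, url, date="January 2018"):
    """Append {{dead link}} after each occurrence of url, unless already tagged."""
    tag = "{{dead link|date=" + date + "}}"
    # Negative lookahead keeps the edit idempotent: already-tagged links are skipped.
    pattern = re.escape(url) + r"(?!\s*\{\{dead link)"
    return re.sub(pattern, url + " " + tag, wikitext)
```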
Would it be possible/appropriate for a bot to run across the articles in Category:CS1 errors: deprecated parameters, replacing the deprecated parameters with their non-deprecated counterparts? I think this should be quite easy for a bot to do (it's 5 x 1-for-1 substitutions) and would remove the (imo unsightly) "Cite uses deprecated parameter" error messages from about 10,000 articles. DH85868993 ( talk) 03:25, 8 November 2017 (UTC)
BTW, I believe AWB with genfixes on will take care of those. Headbomb { t · c · p · b} 00:37, 9 November 2017 (UTC)
900 articles transclude Template:Infobox City which is a redirect to Template:Infobox settlement [10].
They were created over three days, between 6 and 9 September 2007, and all seem to be about municipalities of Spain.
The pipe symbols are placed at the end of each line, against current practice. A type parameter is missing. Sometimes the official name includes ", Spain". On some pages coordinates are present at the bottom of the page, which could go into the infobox. A link to the Template:Infobox_settlement documentation is not present.
Example diff (3 edits) to fix:
Would be nice if a bot could at least fix some of this. 77.179.11.240 ( talk) 03:56, 8 November 2017 (UTC)
Can anyone find instances of the following commented out text in the following articles and replace them with their equivalents from the final list? Abyssal ( talk) 03:44, 17 December 2017 (UTC)
@ Kanashimi: A few months ago I asked you to use your cewbot to scan a list of articles for links to other articles and sort all of the images in the linked articles under the alphabetical headings of the original articles. Could you perform this same operation for the new list of articles I've commented out below? I'd have posted this to your article page but I can't read Chinese. Thanks for all the help you've provided so far. Abyssal ( talk) 15:43, 18 December 2017 (UTC)
Category:Lists of taxa by U.S. state of type locality Abyssal ( talk) 17:04, 12 December 2017 (UTC)
Hi, I'm an admin on the Azerbaijani Wikipedia and I've been referred to coders. We were just wondering whether it is possible to create an archive bot and patrol system for Az.Wikipedia? -- Azerifactory ( talk) 23:55, 12 December 2017 (UTC)
New task for NihlusBOT: to convert something like
"[http://www.fifadata.com/document/FWWC/2015/pdf/FWWC_2015_SquadLists.pdf 2015 World Cup] fifadata.com. 2 December 2015. Retrieved 2 December 2017" to
"{{Cite web|url=http://www.fifadata.com/document/FWWC/2015/pdf/FWWC_2015_SquadLists.pdf|title=2015 World Cup|publisher=fifadata.com|date=2 July 2015|accessdate=2 December 2017}}"
where in the square brackets, the space after the url is the title, the first date represents the date parameter and the latter date after 'Retrieved' represents the accessdate.
Iggy ( talk) 19:06, 2 December 2017 (UTC)
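A sketch of the parse for exactly the shape described above ("[url title] publisher. date. Retrieved accessdate"); real references vary a lot, so anything that doesn't match this exact pattern is left alone:

```python
import re

BARE_RE = re.compile(
    r"\[(?P<url>https?://\S+) (?P<title>[^\]]+)\] "
    r"(?P<publisher>\S+)\. (?P<date>[^.]+)\. Retrieved (?P<accessdate>.+)")

def to_cite_web(text):
    """Convert a bare bracketed reference into {{Cite web}}, or return it unchanged."""
    m = BARE_RE.fullmatch(text.strip())
    if not m:
        return text
    return ("{{Cite web|url=%(url)s|title=%(title)s|publisher=%(publisher)s"
            "|date=%(date)s|accessdate=%(accessdate)s}}" % m.groupdict())
```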
It is currently somewhat awkward to contact an administrator on Wikipedia for any time-sensitive tasks (e.g. revdeling something). The current best way to find an administrator who is online now is through IRC, which is a lot of hoops at times. Therefore why don't we create a bot that:
1) Looks at Special:RecentChanges or another source to provide a list of admins who are actively editing.
2) Posts the 3ish admins who are most likely to be active right now on some new wikipedia page.
3) Continually loops through the two above steps every couple of minutes to keep the resource useful.
This also has the side-effect of providing a way to contact a neutral 3rd party admin in a dispute.
I don't think this is terribly hard to code, but I could be very wrong. I don't think that we need anything sophisticated for "most likely to be active" - it's useful with something as blunt as ranking admins by most recent edit/logged action. The only tricky point in my mind is that it is a bot that has to be running 24/7 with high reliability, because we might be linking to its page from places like WP:Emergency, where one of the steps is contacting an admin.
Is this a practical useful bot that should be created? Tazerdadog ( talk) 08:24, 13 November 2017 (UTC)
On behalf of Wikipedia:WikiProject New York City, I would like to place a request for a bot to archive all article-space urls for DNAinfo and the Gothamist network. These sites have all suddenly shut down with all of their articles redirecting to the shutdown notice, leaving thousands of dead links across the entire wiki. Here is a list of links:
Basically, a bot should either replace the existing external links with web.archive.org citations or something similar, or archive the citation like InternetArchiveBot already does. I can't do it with Archive Bot because these pages are all redirect pages (301 errors) and technically not 404 errors. epicgenius ( talk) 00:46, 3 November 2017 (UTC)
|dead-url= statuses to "no". epicgenius ( talk) 02:12, 8 November 2017 (UTC)
WP:TREE requests that most of the transclusions of the {{Taxonomy/}} family of templates be WP:SEMI protected. The entire family contains ~33,700 templates, but the top 3000 templates (~9%), by transclusion count, account for ~96.6% of all transclusions. These templates are hierarchical, so they have a greater potential for disruption than their transclusion count may suggest. For example, 588 transclusions of {{Taxonomy/Planulozoa}} were generated by a malicious and/or unknowledgeable editor, and then summarily removed by Peter coxhead, with only a few edits by both parties. Since there are so many templates, monitoring all of them is not feasible, so some basic level of protection is desired and here requested. Furthermore, changes to these templates are infrequent and almost exclusively performed by experienced editors, so WP:SEMI seems minimally appropriate for the top 3,000, if not all of these templates.
The resulting list of 2734 permission-free templates is here. ~ Tom.Reding ( talk ⋅ dgaf) 21:30, 12 November 2017 (UTC)
This should be undone. No notice of this proposal was made to the botanists at WP:PLANTS. Our membership does not have template rights and was due to begin a massive overhaul of bryophyte and pteridophyte taxonomy, affecting hundreds of templates. -- EncycloPetey ( talk) 23:59, 12 November 2017 (UTC)
When the {{Taxonomy/}}-expansion efforts are complete, I'm sure that you can post a list of all of the new subtemplates here at BOTREQ, referencing this request, and it will be taken care of. If not, you can ping anyone mentioned in this thread to help get the ball rolling. If you want/need, ping me, regardless, at that time, and I'll do a database scan to make sure no {{Taxonomy/}} templates are left out. ~ Tom.Reding ( talk ⋅ dgaf) 02:00, 13 November 2017 (UTC)
A lot of URLs need cleanups like this. This is a never-ending process, hence a bot might be able to do this job better. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 03:18, 28 October 2017 (UTC)
utm-related parameters in strings. I'm not sure if there's one to trim Google strings. -- Izno ( talk) 14:11, 29 October 2017 (UTC)
Don't just blindly remove the fragment identifiers. They can be there to point to the specific content on the page, for instance. — Omegatron ( talk) 01:39, 31 October 2017 (UTC)
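A sketch that drops only utm_* query parameters while keeping everything else, including the fragment identifier per the caveat above:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_utm(url):
    """Remove utm_* tracking parameters; keep other parameters and the fragment."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if not k.startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(query)))
```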
The UK Department of Education schools information site Edubase recently closed down and the information was moved to a new gov.uk site. The {{ edubase}} template, which is embedded in {{ Infobox UK school}}, has been fixed to point to the new URLs. But there still exist a bunch of citations embedded in the school articles, which point at the old Edubase site and are therefore all {{ dead link}}s now. I haven't been able to figure out the full extent of the issue, but the first 4 I have looked at - South Farnham School, Cobham Free School, The Royal Alexandra and Albert School, All Hallows Catholic School - all have one. I have randomly clicked through the rest of the articles in {{ schools in Surrey}} and quite a few have Edubase references. But some don't. By the looks of things Category:England education navigational boxes by county should be the top level source for which pages to check. I have had a look at writing a bot to make the fixes but it looks a bit of a tall order for a bot novice, despite my being a coder in RL. I think the logic would be something like this:
Anyone fancy doing this? Fob.schools ( talk) 16:32, 29 October 2017 (UTC)
The subsections of 1000 (number) have changed. What I would like to have happen is for
where s is one of
and t is
The article was reorganized. There may be other similar changes in other articles, but let's start with 1000. — Arthur Rubin (talk) 01:36, 6 November 2017 (UTC)
Could a bot run and do something similar to this for the English Wikipedia? The latter version is compatible with Visual Editor. -- Magioladitis ( talk) 22:55, 12 November 2017 (UTC)
Use a bot or semi-automated program to find infoboxes that have not been updated to include upright-factor image sizing support, and make a list to enable editors to make minor edits to update them. This is a simple edit like diff.
For those unaware of the concept, it is basically a way of making images responsive, explained in the picture tutorial as good practice. However, defining an image with upright factors requires template support, which is not yet across the board.
It would be very helpful if someone could use a bot/quarry/magic thingie to create a to-do list of templates that are not yet fixed; getting a bot to actually do the fix is probably unwise. Dysklyver 12:11, 12 November 2017 (UTC)
I regularly clean up links to bad sources. The interface does not permit linksearching by namespace, and in any case many links mentioned on Talk are being proposed as sources. I would like to suggest:
This would :
Some examples:
Thoughts? Guy ( Help!) 22:06, 16 November 2017 (UTC)
Please make a translation bot to translate articles on other Wikipedias (not the English Wikipedia) using Google Translate. — Preceding unsigned comment added by 5.219.141.214 ( talk) 11:38, 14 January 2018 (UTC)
footballdatabase.eu has more articles about football; please make a bot for adding articles from the site. — Preceding unsigned comment added by 37.254.182.198 ( talk) 09:06, 14 January 2018 (UTC)
But most of the articles that users create are small and take a relatively long time to create, so a robot would be faster than the users who make small articles. — Preceding unsigned comment added by 5.22.35.28 ( talk) 12:42, 15 January 2018 (UTC)
Can someone replace the following code:
[[File:Canis dirus reconstruction.jpg|right|50 px]]<!-- [[Dire wolf]] -->
with
[[File:Canis dirus reconstruction.jpg|thumb|right|Artist's restorations of a ''[[Canis dirus]]'', or dire wolf.]]<!-- [[Dire wolf]] -->
across these articles? Abyssal ( talk) 15:34, 9 January 2018 (UTC)
Perhaps there should be an English version of the Lsjbot that's on the Swedish and Cebuano Wikipedias, to generate stub articles interlinked in Wikidata for current redlinks on non-controversial topics like airports, locations (e.g. comarcas of Spain, municipalities, political subdivisions, etc.), events (e.g. aviation accidents), geology, small cities, military divisions and awards, technologies, plants, animals, medicine, etc. -- PlanespotterA320 ( talk) 02:35, 29 December 2017 (UTC)
Running IABot on an article checks whether an archive exists for each citation, and adds links to existing archives, but if an archive does not exist then it does not create one. Is there a bot (or script) which can be run on an article to create archives for all external links, or could someone who has the skills create one? In view of the constant link rot problems, this would be incredibly useful. Dudley Miles ( talk) 08:52, 12 December 2017 (UTC)
I no longer have the time to maintain my bot's task 1, and it has been more difficult than I expected, partially due to AWB quirks and a lack of helpful documentation. So, I would like someone else to help substitute the following templates, removing integer parameters in those which are named "Infobox", and removing hidden comments in the first line of the templates. If more than one of these is on a page then all should be substituted in the same edit. The bot does not really need to do anything else since most of the cleanup is handled by the substitution through Module:Unsubst-infobox (although if you want you can additionally replace |length={{Duration|m=mm|s=ss}} with |length=mm:ss and stuff like that).
Furthermore, pages in these categories should not have the templates substituted due to errors which need to be fixed manually for various reasons. I would have done the substitutions sooner by using User:AnomieBOT/docs/TemplateSubster but it would substitute all of the transclusions, and at a very slow rate (since AnomieBOT does a lot of other things and never runs tasks in parallel). The table nesting errors are probably bot-fixable but I couldn't do it with AWB and I can't write in Python.
I believe these would not count as cosmetic changes, since some of these are per the result of TfD discussions which closed with a consensus to merge, and because pages in these templates with deprecated parameters are automatically included in a tracking category and would be removed from the category upon substitution. About 200,000 pages would be affected. Jc86035 ( talk) 07:33, 23 October 2017 (UTC)
@ Nihlus: No, I don't think so. I don't think I have the time to do it and an experienced pywiki user could do the fixes much faster than I could.
Most of these fixes, for the Module:String errors, would probably involve changing the parameter name from |Last/Next single/album= to |prev/next_title= to bypass the Module:String fixes if there's no date in the parameter and the title doesn't contain any slashes (and removing the italics, bold, quote marks (only if paired at start/end or the title is enclosed in a link – see AWB configuration for how irritating these are)), and waiting three to twenty years for the Wikidata-compatible infoboxes to come around for the dates to be added. (Note that |Last single= and similar can also occasionally be |last_single=.) There are also other problems.
There are also other probably-automatable fixes which do not remove pages from the error categories.
Naturally, I did not get around to any of these, and none of these are in the AWB configuration. Pretty much all of the AWB configuration is adding <br /> and fixing italics, quote marks, brackets, etc.
This discussion (and some others on Ojorojo's talk page) may help. Jc86035 ( talk) 09:07, 23 October 2017 (UTC)
The page list is probably too long for AWB to do on its own (I think it sets its own limit at 25,000), so I would collate all of the pages transcluding the templates into a text document, remove duplicate lines with BBEdit or another similarly featured text editor (leaving one of each group of duplicates), then do the same for the pages in the error categories, then stick both lists into the same text document and remove duplicate lines (leaving none of the duplicate lines). Jc86035 ( talk) 09:19, 23 October 2017 (UTC)
In my ongoing quest to clean up the music templates, I'm happy to inform everyone that the Singles template is now free of errors and only contains valid template fields (be they old or new). - X201 ( talk) 08:42, 16 November 2017 (UTC)
Nihlus, are you doing or going to do this task (just the substitution, not the other things)? If you aren't it's fine since I might be able to do this myself at some point in the next four months. Jc86035 ( talk) 08:04, 26 November 2017 (UTC)
Code and articles on this page. Abyssal ( talk) 02:58, 20 December 2017 (UTC)
Still really need some help with this. Abyssal ( talk) 04:11, 25 December 2017 (UTC)
Any takers? Abyssal ( talk) 14:26, 2 January 2018 (UTC)
Please make a bot for adding articles from Wikia, for example nintendo.wikia.com. — Preceding unsigned comment added by 5.75.62.30 ( talk) 07:05, 20 January 2018 (UTC)
title says it all — Preceding unsigned comment added by 2601:247:c101:b6c0:599:60b6:ce0b:32cc ( talk)
Would it be possible for a bot to automatically fix errors like Special:Permalink/778228736, where some redirect category templates are placed within {{ Redirect category shell}} but some aren't? feminist ( talk) 10:29, 10 January 2018 (UTC)
Please make a bot to update articles, for example updating sports players' stats (games and goals) and updating league tables. — Preceding unsigned comment added by 5.219.145.98 ( talk) 08:19, 28 January 2018 (UTC)
Please make a bot to update articles, for example updating soccer player stats (games and goals) and updating soccer tables. — Preceding unsigned comment added by 5.22.34.89 ( talk • contribs) 08:05, 30 January 2018 (UTC)
Please make a GeoNames bot for adding articles from geoname.org. — Preceding unsigned comment added by 37.255.6.103 ( talk • contribs) 10:26, 30 January 2018 (UTC)
Please make a Catalogue of Life bot for adding articles from catalogueoflife.org. — Preceding unsigned comment added by 37.255.6.103 ( talk • contribs) 10:43, 30 January 2018 (UTC)
Hello fellow Wikipedians -
The image removal bots (e.g. CommonsDelinker) are doing their jobs removing deleted images, but I have noticed they don't delete the existing captions if there was one. For example, this diff removed a deleted photo but not the existing caption. I was wondering if there could be a bot which will remove the captions on infoboxes without images, or is there already one? Iggy ( talk) 22:04, 8 December 2017 (UTC)
I had been using a link to a PDF article as a citation in my articles on Tamil films.
This has been used in numerous articles for the past year or so. Now I find that this link URL is blacklisted and a bot has placed a notification to that effect in many articles. It will be a tiring job to replace the link in individual articles.
Is it possible to do "replace ..... with ...."
The blacklisted link is: https://chasingcinema.files.wordpress.com/2015/09/text.pdf
to be replaced with: https://indiancine.ma/texts/indiancine.ma%3AEncyclopedia_of_Indian_Cinema/text.pdf
Thank you.--UKSharma3 ( User | talk | Contribs) 10:25, 7 January 2018 (UTC)
As per this discussion on my talk page, there are about 433 New York City Subway station articles tagged by WikiProject New York City Public Transportation, the vast majority of which are missing a tag for WikiProject New York City. The list is here. I was wondering if a bot could go around and add {{ WPNYC}} tags to the talk pages that are missing them. epicgenius ( talk) 21:28, 8 January 2018 (UTC)
I can set |importance= for the pages that are obviously low, mid, high, etc., and leave the unsure-importance ones blank for later assessment. For |class=, would inheriting {{WikiProject Trains}}' |class= be desired/appropriate? ~ Tom.Reding ( talk ⋅ dgaf) 21:57, 8 January 2018 (UTC)
|importance=low for all of these tags, since I don't think any single station is particularly essential to NYC itself. epicgenius ( talk) 22:27, 8 January 2018 (UTC)
The |transportation-importance= parameter is redundant since, the vast majority of the time, it was already defined under the WP:TRAINS template. epicgenius ( talk) 05:49, 9 January 2018 (UTC)
I would like to request that a bot takes care of all the faulty daily page view tags on possibly thousands of article talk pages. They just die and don't work. They need to be replaced to work again. Most likely the same situation exists on most articles tagged with it at some point, like here: Talk:Oba_Chandler, where it just simply dies and turns blank. Ping me for questions or confirmation that a bot takes care of the issue. -- BabbaQ ( talk) 22:20, 26 January 2018 (UTC)
Would it be possible to have a bot fix the CWGC ( Commonwealth War Graves Commission) URLs from the previous format (which no longer works) to the current one? URLs of the form http://www.cwgc.org/search/casualty_details.aspx?Casualty=XXXX should be in the form http://www.cwgc.org/find-war-dead/casualty/XXXX/ (not sure if the trailing / is needed, possibly it is). The same applies to the cemeteries, which were in the form http://www.cwgc.org/search/cemetery_details.aspx?cemetery=XXXX&mode=1 and should instead be in the form http://www.cwgc.org/find-a-cemetery/cemetery/XXXX. The casualty and cemetery ID numbers are not a set number of digits long, they seem to vary between 4 and 7 digits, from what I have seen, but might be less and more digits as well. This has been broken for nearly 2 years now. As of 30 January 2018:
So a total of 839+1641 = 2480 links to fix. @ Pigsonthewing: as we discussed this back then. See also {{ CWGC}} and {{ CWGC cemetery}}. Carcharoth ( talk) 00:14, 30 January 2018 (UTC)
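A sketch of the two rewrites, capturing the ID as a run of digits of any length (so 4-7 digits or otherwise); the trailing slash on the casualty form follows the request:

```python
import re

CASUALTY_RE = re.compile(
    r"http://www\.cwgc\.org/search/casualty_details\.aspx\?Casualty=(\d+)")
CEMETERY_RE = re.compile(
    r"http://www\.cwgc\.org/search/cemetery_details\.aspx\?cemetery=(\d+)&mode=1")

def fix_cwgc(text):
    """Rewrite old-format CWGC casualty and cemetery URLs to the current format."""
    text = CASUALTY_RE.sub(r"http://www.cwgc.org/find-war-dead/casualty/\1/", text)
    return CEMETERY_RE.sub(r"http://www.cwgc.org/find-a-cemetery/cemetery/\1", text)
```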
Can someone create a report with the following information in a table based on {{ Connected contributor (paid)}}?
You should skip any pages that aren't in the Talk: namespace. (e.g. general disclosures on user pages, etc). ~ Rob13 Talk 21:38, 26 November 2017 (UTC)
The new form of {{lang-xx|italic}}, without '' in the second part, has generated errors in thousands of articles. Thus, I suggest bringing back the old form of the template, in which the old versions that have the markup for italics are reconsidered as correct! Mark Mercer ( talk) 17:51, 15 December 2017 (UTC)
Replace the "other_names" param with "nickname", in mixed martial arts biographies only. Can someone create that? Thanks. TBMNY ( talk) 17:49, 15 December 2017 (UTC)
Most articles for Chinese railway lines were recently renamed, and a lot of pages need to be updated. Could someone make a bot to:
add <noinclude>[[Category:People's Republic of China rail transport succession templates]]</noinclude> if the page is not already in that category;
change |system=CR to |system=CRH, updating template/page titles and station names (maybe adding |notemid=Part of the [[Name high-speed railway]] for sub-lines which are part of designated corridor lines); and
There may be other issues with the templates and articles which I haven't addressed. This should affect about 100 templates and 450 articles (a surprisingly small number, given the number of railway stations in China). Consider doing genfixes.
Thanks, Jc86035 ( talk) 17:36, 22 December 2017 (UTC)
There are two similar parameters in Template:Sfn, p and pp. p is for a single page; pp is for a range of pages. Sometimes users (like myself..) put a range of pages for p, or a single page for pp. It seems like a good bot task to run through sfn templates and if it is a range of pages, change the parameter from p to pp (see John Glenn history for a recent example). Another thing that could be tacked on is replacing hyphens and emdashes with endashes for the pp parameter.
It may make more sense to just make p and pp one parameter in the Sfn template, and have the template respond correctly if it is a single page or range of pages. Long story short: there are several solutions to this that can be automated, and it would save some editing time. Kees08 (Talk) 01:16, 4 February 2018 (UTC)
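A sketch of the first solution (switching |p= to |pp= when the value is a range, and normalising the separator to an en dash); single pages and already-correct ranges are left alone, and a real run would use a template parser rather than a bare regex:

```python
import re

# |p=10-12 (hyphen, en dash, or em dash) becomes |pp=10–12.
RANGE_RE = re.compile(r"(\|)p(=\s*\d+)\s*[-–—]\s*(\d+)")
# An existing |pp= with a hyphen or em dash gets an en dash.
PP_DASH_RE = re.compile(r"(\|pp=\s*\d+)\s*[-—]\s*(\d+)")

def fix_sfn(call):
    call = RANGE_RE.sub(r"\1pp\g<2>–\3", call)
    return PP_DASH_RE.sub(r"\1–\2", call)
```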
Not sure if there is a proper way to close this, but I do not think it is possible for the rationale stated above. Kees08 (Talk) 18:50, 21 February 2018 (UTC)
Hi, please create a bot for adding Wikipedia articles and pages to Wikidata. For example, Portugal–Thailand relations had no item in Wikidata, but I created one. — Preceding unsigned comment added by 37.254.181.81 ( talk) 09:47, 14 February 2018 (UTC)
I made a category called Category:Articles needing their RCDB number moved to Wikidata, which has articles placed in it by having an RCDB number in {{ Infobox roller coaster}} or {{ RCDB}}. I was able to keep up with copying these numbers to the Wikidata entries for a while, but now I'm having trouble. I'd love if someone could make a bot that would be able to help me out. Elisfkc ( talk) 02:25, 13 December 2017 (UTC)
There was a change (relatively) recently to {{
Infobox former country}} in which the symbol
parameter was changed to symbol_type_article
. (See also:
Template talk: Infobox former country#"Symbol" not currently functional.) Other than the parameter's name nothing about it has changed (at least from the user's point of view) so it should just be a straight swap. Since the template is used on >3000 pages this seems like a job best suited to a bot, and apparently there is one already which hunts down deprecated parameters. Could this please be added to that bot's tasks (or, if not, another bot set up to do so)? Thanks.
Alphathon /
'æɫ.fə.θɒn/ (
talk) 16:35, 30 October 2017 (UTC)
We seem to have many instances of {{
Cite web}} with |publisher=
set to [http://www.thepeerage.com ThePeerage.com]
or [http://www.thepeerage.com/info.htm ThePeerage.com]
; for example on
Henry de Beaumont.
This needs to be changed to |website=thepeerage.com
. Can anyone oblige, please?
Andy Mabbett (Pigsonthewing);
Talk to Andy;
Andy's edits 18:36, 26 November 2017 (UTC)
*{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |publisher=[http://www.thepeerage.com ThePeerage.com]}}
into *{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |website=thepeerage.com}}
? --
Gabrielchihonglee (
talk) 13:10, 15 January 2018 (UTC)
http://www.thepeerage.com/info.htm
.
Andy Mabbett (Pigsonthewing);
Talk to Andy;
Andy's edits 13:16, 16 January 2018 (UTC)
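The substitution requested in this thread can be sketched with a single regex, using the exact before/after strings quoted above. The pattern is my own; it handles both URL variants mentioned (with and without /info.htm) and leaves the rest of the citation untouched.

```python
import re

# Sketch of the |publisher= → |website= swap discussed above; the two
# publisher values are those quoted in the thread.
PEERAGE = re.compile(
    r"\|\s*publisher\s*=\s*\[http://www\.thepeerage\.com(?:/info\.htm)?\s+ThePeerage\.com\]"
)

def fix_peerage(wikitext: str) -> str:
    return PEERAGE.sub("|website=thepeerage.com", wikitext)

before = ('{{cite web|last=Lundy |first=Darryl |date=31 January 2011 '
          '|url=http://www.thepeerage.com/p10288.htm#i102873 '
          '|title=Henry Beaumont, 1st Earl of Buchan '
          '|publisher=[http://www.thepeerage.com ThePeerage.com]}}')
print(fix_peerage(before))
```

This prints the citation with |website=thepeerage.com in place of the linked publisher, matching Gabrielchihonglee's proposed output above.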
Hi bot developers, I'd like to request a bot to go through all articles using {{ Infobox anatomy}} and subtemplates:
There has been consensus to remove the "Dorlands" parameter - see Template talk:Infobox anatomy and here. "Dorlands" links to a proprietary dictionary that is no longer maintained; as our articles (even stubs) are of equal or greater length, editors have agreed that it should be deleted.
I'd like to ask if a bot could scan through all articles using these infoboxes and remove empty and full parameters of Dorlands, DorlandsPre, DorlandsSuf, DorlandsID.
We are discussing if any other parameters should be removed or moved to Wikidata, and it's probable I'll be here again in a few weeks to request an update to articles based on that discussion... but for the moment just hoping we can remove all the Dorlands links. I realise I also will have to update the infobox documentation and the subtemplates and I will get to that soon. -- Tom (LT) ( talk) 01:10, 10 February 2018 (UTC)
Thanks Nihlus. We have also moved "FMA", "MeshName", "MeshNumber", "GrayPage" and "GraySubject" to Wikidata, so if your bot could remove those too, if possible, that would be appreciated. -- Tom (LT) ( talk) 01:16, 11 February 2018 (UTC)
{{
Infobox medical condition}}
be included in this run as well?
Nihlus 20:04, 12 February 2018 (UTC)
{{
Infobox medical condition}}
is not included. Data export to Wikidata was done only for anatomy related infoboxes listed at the top of this section. So for other templates, keep their data; any other template would need a separate discussion and data export. --
Was a bee (
talk) 22:10, 12 February 2018 (UTC)Thanks Nihlus... I will run through and update the template series, and advise of anything the bot has missed within a week or so. -- Tom (LT) ( talk) 10:02, 13 February 2018 (UTC)
I've had a small discussion with another user saying that when using the measurements for metres and order flipping, e.g. convert|x|m|ftin|order=flip, the ftin is identified as unnecessary. A bot will help to run the task of removing the unnecessary 'ftin|' part of the source, which leaves us with 'convert|x|m|order=flip', where x is the number, in metres, of the height. Iggy ( Swan) 15:39, 3 February 2018 (UTC)
I've had a small discussion with another user saying that when using the measurements for metres and order flipping, e.g. convert|x|m|ftin|order=flip, the ftin is identified as unnecessary for infobox height measurement. A bot will help to run the task of removing the unnecessary 'ftin|' part of the source in the infoboxes which leaves us with 'convert|x|m|order=flip', where x is the number, in metres, of the height of the person in the infobox. Iggy ( Swan) 16:36, 3 February 2018 (UTC)
|ftin=
isn't used. I might be missing something, but I will pass on this one.
Primefac (
talk) 16:52, 3 February 2018 (UTC)
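The mechanical replacement Iggy describes could be sketched as below. To be clear, this is my own illustration of the requested edit, and, as Primefac notes above, whether dropping the output unit is actually safe would need checking before any bot run.

```python
import re

# Sketch of the removal requested above: drop "ftin" from
# {{convert|x|m|ftin|order=flip}}. The pattern is mine, not a vetted bot rule,
# and it deliberately only fires when |order=flip immediately follows.
FTIN = re.compile(r"(\{\{convert\|[^{}|]+\|m)\|ftin(\|order=flip)", re.IGNORECASE)

def drop_ftin(wikitext: str) -> str:
    return FTIN.sub(r"\1\2", wikitext)

print(drop_ftin("{{convert|1.75|m|ftin|order=flip}}"))
# → {{convert|1.75|m|order=flip}}
```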
I am trying to sort out the unholy mess that is Adopt-a-User with a view to bringing it back to life in a somewhat modified guise. Most tasks are now done, but I am left with a problem that there are two types of template on the user pages of inactive editors which will regularly need removing. With redundant templates, no-one can see who is genuinely seeking support, nor, indeed, who is genuinely able to offer support to them.
Mine is a related call for assistance to this recent one, but is actually a lot more urgent and difficult to deal with manually.
{{
adoptme}}
template on their page whenever they want to seek assistance under this scheme. I found 109 editors that showed up in
Category:Wikipedians seeking to be adopted in Adopt-a-user. I've since manually stripped out all inactive newcomers, leaving just 18 active ones. But this task will need to be repeated regularly so as to remove it from editors who have not been active for 4 weeks or more.{{
adopting}}
on their userpages. I sampled 52 entries, and conclude only 7% are active, productive editors today. I need to strip out these templates every month from all editors who have been inactive for, say, 4 to 6 weeks, plus anyone at all who has total edit counts of less than 500, as they don't meet the criteria for experience. In both cases I would also wish to leave messages on the Talk pages of those two sets of editors to explain the template's removal, and what their options are if the editor resumes activity. I hope this all makes sense, and will be very grateful for any support that can be given. (Note: I will be unavailable to respond to follow-up questions between 3rd and 6th February.) Many thanks. Nick Moyes ( talk) 20:24, 1 February 2018 (UTC)
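For what it's worth, the eligibility rule described above reduces to a small per-editor check a bot could apply. The 4-week and 500-edit thresholds come from the request; the helper itself is my own illustration (a real bot would fetch edit count and last-edit timestamp from the MediaWiki API).

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Sketch of the inactivity rule: remove the template when the editor has
# fewer than 500 edits or no edit within the last 4 weeks.
def is_stale(edit_count: int, last_edit: Optional[datetime], now: datetime,
             weeks: int = 4, min_edits: int = 500) -> bool:
    if edit_count < min_edits or last_edit is None:
        return True
    return now - last_edit > timedelta(weeks=weeks)

now = datetime(2018, 2, 1, tzinfo=timezone.utc)
print(is_stale(1200, datetime(2018, 1, 20, tzinfo=timezone.utc), now))  # → False
print(is_stale(1200, datetime(2017, 11, 1, tzinfo=timezone.utc), now))  # → True
```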
I'm no bot-writer, and I suspect it's not worth coding a bot specifically for this; but if this is a task that an existing bot can do, that would be great. I started in on doing them manually ( [1], [2], [3], [4]) until I realized how many there were. TJRC ( talk) 18:46, 7 August 2017 (UTC)
[List of films in the public domain]
using
wikiget (./wikiget -a "insource:/\[List of films in the public domain\]/"
) then load that list into AWB and do a plain (non-regex) string replace. --
Green
C 15:55, 9 August 2017 (UTC) Hi all, the Swiss Office of Statistics contacted Wikimedia CH to check the possibility to change around 50'000 links. They have changed systems and, consequently, also the structure of the links. It seems that this modification should be done in several linguistic versions, and they can provide a sheet listing the old obsolete link and the new one. Do you think that this activity can be done easily? Do we have to contact several wikipedias, or is there a bot able to make the change in several linguistic versions? -- Ilario ( talk) 09:29, 4 August 2017 (UTC)
Wikimedia CH is the Swiss Chapter of the global Wikimedia movement, and officially recognized as such by the Wikimedia Foundation. Hasteur ( talk) 19:33, 4 August 2017 (UTC)
Would it be possible for a bot to change every instance of the dead link "search.japantimes.co.jp" to "www.japantimes.co.jp" to fix references in Japan-related articles? Thanks.-- Pawnkingthree ( talk) 17:51, 16 August 2017 (UTC)
{{
dead link}}
or |deadurl=yes
or converted to a
https://web.archive.org/web/2012/http://search.japantimes... so it will be more serious bot work to untangle correctly. Almost wonder if it wouldn't be easier for someone to do it manually, or supervised with a text replace in AWB and manually undo any extraneous dead tags. --
Green
C 19:21, 20 August 2017 (UTC)
Hi there,
I'm looking for a helpful bot who's willing to make a large number of fixes. At the moment, there are many articles directly related to
Afghanistan, where the incorrect {lang-fa} template is listed in the lede, instead of the corrected {lang-prs} template. All the {lang-fa} templates on these articles, i.e. articles about buildings, people (post-19th century), cities, towns, rivers, provinces, mountains, etc. need to be changed to the correct {lang-prs} template. So basically changing/adding 3 letters on every one of these articles.
The official name of the variant of the Persian language spoken in Afghanistan is Dari, and it has its own lang-template. However, until 2015, no such template existed on Wiki, hence people carelessly dropped the lang-fa template on all these articles. All the best, - LouisAragon ( talk) 23:19, 13 August 2017 (UTC)
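The template swap itself is trivial to sketch; the hard part, as the request implies, is restricting the run to Afghanistan-related articles, which this illustration (my own, not an approved bot task) does not attempt.

```python
import re

# Sketch of the {{lang-fa}} → {{lang-prs}} swap requested above.
# Scoping to Afghanistan-related articles must be done by category/page
# selection before this runs; it is not handled here.
LANG_FA = re.compile(r"\{\{\s*lang-fa\s*\|")

def to_dari(wikitext: str) -> str:
    return LANG_FA.sub("{{lang-prs|", wikitext)

print(to_dari("{{lang-fa|Kabul}}"))
# → {{lang-prs|Kabul}}
```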
I don't know how practical this might be, but I thought it would be helpful if redlinks could be tagged as such and a bot could then automatically add the month/year the redlink (or redlink template) was added.
So, for instance, if I create a link to John Simon (author), which is a redlink, one of the following would happen:
I feel this would be extremely helpful for determining how long a redlink has been extant, and would give editors an indication of the likelihood that the redlink might ever become a bluelink.
Never done a bot request before, so apologies if I've horribly mangled this or such. Thanks! DonIago ( talk) 13:30, 22 September 2017 (UTC)
So, as someone new to this process, what are the next steps here? There seems to be a general consensus that it's a good idea to create a bot to track and date redlinks, though I'm not sure there's agreement on the best form that should take. Something would certainly be preferable to nothing, I think. DonIago ( talk) 18:39, 2 October 2017 (UTC)
Would it be possible for a bot to archive each and every reference cited on a particular requested WP page? Doing so manually consumes a lot of time when there are hundreds of references on a page. -- Saqib ( talk) 15:27, 25 August 2017 (UTC)
Re this conversation, User:InternetArchiveBot does a great job scanning our 5,000,000 articles for deadlinks and fixing them, but it moves very slowly. The FA Coordinators agree that it would be useful to keep Featured material patrolled much more regularly. We could do this by manually dumping a list of article names into the tool, but that's not rigorous and a working 'Featureddeadlink bot' could presumably quite happily also patrol FLs, other FT articles and even GAs. So perhaps the request is a bot that will initially patrol the FAs only, with a view to expanding the remit to other quality material once it's proved itself. That level of detail I can leave to your expertise. -- Dweller ( talk) Become old fashioned! 09:44, 22 August 2017 (UTC)
I have been tagging lots of broken links to the New York Observer, but most of the tags that I added have been removed. Since the Internet Archive Bot is unable to repair these links, is there another way that we can update them? Jarble ( talk) 19:54, 27 August 2017 (UTC)
I have been collecting statistical data on WP:FAC for over a year now; see this thread for details. It would be a big help for certain kinds of reporting if I could convert a historical revision of WP:WBFAN into a simple list of editor/count pairs. Any format of output would be fine; table, comma separated list -- anything predictable. I just need to convert the names with wrapped star lists into names with numbers of stars, along with the date of the revision.
Ideally this would be something I could run at will, but if someone runs this and sends me a file with the results that would work too.
The benefit to Wikipedia is that we are trying to make it easier for first-time nominators to succeed at FAC, but we can't know if we're succeeding without information about who had WBFAN stars and when they got them. Thanks for any help with this. Mike Christie ( talk - contribs - library) 13:37, 9 August 2017 (UTC)
If someone could do that, that would be much appreciated. We've recently added some redirect detection/creation logic to the template, and it would be nice to know which articles are in need of review. Headbomb { t · c · p · b} 19:45, 29 August 2017 (UTC)
@ JJMC89:: The infobox template has been massively updated with automated search functionality. If you could run the bot again, this time only on Category:Articles with missing ISO 4 redirects, that would be super helpful! Headbomb { t · c · p · b} 12:58, 1 September 2017 (UTC)
To help clear up the backlog in Category:Articles with missing ISO 4 redirects, if a bot could
|abbreviation=J. Foo.
Some articles will contain multiple infoboxes.
#REDIRECT[[Article containing Infobox journal]] {{R from ISO 4}}
|abbreviation=J. Foo.
Thanks! Headbomb { t · c · p · b} 11:57, 31 August 2017 (UTC)
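A sketch of the redirect text the request asks for, given an article title and its |abbreviation= value. The helper name is my own; actually creating the page would be a separate step (e.g. with pywikibot), and handling dotted/dotless title variants is left to the bot.

```python
# Sketch: build the ISO 4 redirect wikitext requested above. The redirect
# lives at the abbreviation and points back at the journal article.
def iso4_redirect(article: str, abbreviation: str) -> tuple[str, str]:
    return abbreviation, f"#REDIRECT[[{article}]] {{{{R from ISO 4}}}}"

title, text = iso4_redirect("Journal of Foo", "J. Foo.")
print(title)  # → J. Foo.
print(text)   # → #REDIRECT[[Journal of Foo]] {{R from ISO 4}}
```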
Wikipedia has hundreds of articles that cite AOL News, but all of the links to this site are now broken. I tried using IABot, but it could not find archived URLs for these references. Is there another bot that can add archive URLs for all of these links? Jarble ( talk) 17:12, 1 September 2017 (UTC)
Jarble, IABot is currently rescuing aolnews.com where it can or leaving a dead link tag. If you see any it missed let me know. Should be done in an hour or so. -- Green C 14:33, 4 September 2017 (UTC)
I have noticed that there are a lot of ISO standards that do not have an article on Wikipedia. Considering the fact that there are a lot of ISO standards (by my estimate, over 21000 of them in English alone, some that have possibly been updated), of which (rough estimate) maybe 10% - 20% have an article, the number of ISO standards could potentially warrant some automated help. Since I couldn't find a concerted effort to document ISO standards in Wikipedia, I thought it'd be useful to debate whether it would be desirable and feasible to use a bot to increase Wikipedia's coverage of the ISO standards.
Should Wikipedia cover ISO standards extensively? Well-known ISO standards like the ISO 9000 and 27000 families obviously meet notability standards, but lesser-known standards might not. In my opinion, considering the fact that the ISO's work constitutes global standards, there is a case to be made, and there is most certainly precedent for jobs like this.
Since I don't have any previous experience with writing Wikipedia bots, I thought I'd chime in here first. Would this be something that would be useful for Wikipedia, and would it be feasible to create valuable articles or article stubs this way? There is information available from the [ website] in a structured form that could go some way towards creating articles, and each standard publishes some metadata about the standard and usually has a description (see for instance 1, 2, 3.
I don't know of any project that is already working towards incorporating information about international standards, or ISO standards specifically, into Wikipedia, nor a bot that works in a related field. If this might be useful, I might very well be interested in writing a bot that either writes stubs or automatic articles on ISO standards, prepares drafts, keeps metadata about ISO standards up-to-date, or something along those lines. I'd gladly hear some feedback. Nietvoordekat ( talk) 11:09, 31 August 2017 (UTC)
A very useful news site in a specialised field ( The Week in Chess) has changed domains at least twice, meaning there are (at a guess) hundreds of refs or external links to change. They would all change in a regular way (i.e. simple string replacement, or at worst regular expression replacement). There has got to be an already existing bot to do this. Can someone point me in the right direction? Adpete ( talk) 12:32, 31 August 2017 (UTC)
It's pretty simple. Every URL beginning with " http://www.chesscenter.com/twic/twic" needs to instead begin with " http://theweekinchess.com/html/twic". Note these are not the complete URLs, but anything after that is unchanged. e.g. at Baadur Jobava, reference 2 needs to change from http://www.chesscenter.com/twic/twic646.html#6 to http://theweekinchess.com/html/twic646.html#6 . I'm happy to run it if given some pointers. But if you want to run it, thanks, that'd be great. I'd be curious to hear how many URLs get changed, if you do.
And to Jonesey95, yes a template could be a good idea too, though enforcing compliance can be difficult, so I'd prefer to do the bot in the first instance. Adpete ( talk) 23:22, 31 August 2017 (UTC)
twic/twic in the URL that need changing. Definitely something a bot would be good for. The other 100ish point to different places. Primefac ( talk) 12:54, 7 September 2017 (UTC)
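The prefix swap Adpete describes is a plain (non-regex) string replacement; the two prefixes below are quoted from the thread, and anything after the prefix is left unchanged.

```python
# Sketch of the TWIC URL migration discussed above.
OLD = "http://www.chesscenter.com/twic/twic"
NEW = "http://theweekinchess.com/html/twic"

def fix_twic(wikitext: str) -> str:
    return wikitext.replace(OLD, NEW)

print(fix_twic("http://www.chesscenter.com/twic/twic646.html#6"))
# → http://theweekinchess.com/html/twic646.html#6
```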
In conjunction with the discussion raised at this discussion, it will probably be helpful for the community to get an idea about the numbers and keep track of the articles that are draftified from main-space--in a friendly format. SoWhy has written a SQL query for the purpose. I seek the development of a bot that will maintain a list of articles which are draftified, along with necessary info such as the time of draftification, draftifying editor, article creator, last edit date etc. in a tabular format, and that the table will be updated at regular intervals. Thanks! Winged Blades of Godric On leave 11:49, 27 August 2017 (UTC)
AND log.log_params RLIKE 'noredir";s:1'
). A bot should probably also find pages moved with a redirect where the redirect was later deleted as
WP:R2. Also maybe list prior AFDs or MFDs for the article/draft. Regards
So
Why 12:02, 27 August 2017 (UTC)
@ Winged Blades of Godric, SoWhy, and Thincat: I've drafted an example report below.
Example for 2017-09-02 as of 20:46, 4 September 2017 (UTC)
Please let me know if you have any comments on it. — JJMC89 ( T· C) 23:11, 3 September 2017 (UTC)
Just in the last few days, I've twice messed up when blocking users: I left the block template and forgot to levy a block. This caused confusion in one circumstance, and in the other, another admin levied a longer block because it looked like the person had already come off an earlier shorter block.
What if we had a bot that would notify admins who added a block template without blocking the user? I'm envisioning the bot finding new substitutions of all block templates, checking to see whether the user really is blocked, and leaving a "you messed up" message (comparable to what BracketBot did) to remind the admin to go back and fix the situation. Sometimes one admin will block and another will leave the message; that's fine, so the bot shouldn't pay attention to who actually levied the block. And bonus points if the bot finds that a non-admin left the template on a non-blocked user's talk page; the bot could leave a note quoting the {{ uw-block}} documentation: Only administrators can block users; adding a block template does not constitute a block. See RFAA to request that a user be blocked. Finally, since actually doing the blocking is quick and simple, we don't need the bot to wait a long time; sometimes you need to compose a long and thoughtful message explaining the block, but you don't need to do that when using Special:Block. Nyttend ( talk) 01:30, 18 August 2017 (UTC)
Articles about deceased U.S. persons often cite the Social Security Death Index, which lies behind a paywall at ancestry.com. An example may be found at George Luz. I have no idea of the total count. The SSDI is also available at familysearch.org for free. The version at Family Search does not display the social security number; the version at ancestry once did but, according to our page no longer does. Converting from ancestry to family search will, I think, require a little human effort and judgment. I don't know if that raises a WP:SYNTHESIS flag. Is it possible to search for uses of the SSDI at ancestry and put them into a list or, preferably, a hidden (Wikipedia:?) category so they can be changed to the Family Search version?-- Georgia Army Vet Contribs Talk 00:53, 5 September 2017 (UTC)
Both params (trans_title
and accessdate
) are deprecated and give an "ugly" warning to the readers. Changing them to trans-title
and access-date
, respectively, eliminate the warning.
MYS
77
✉ 11:43, 10 November 2017 (UTC)
I could do this. -- Magioladitis ( talk) 13:46, 10 November 2017 (UTC)
|accessdate=
is a valid parameter. –
Jonesey95 (
talk) 14:24, 10 November 2017 (UTC){{ archive now}}
I've been manually adding lots of links to references in articles like this one. Does Wikipedia have any bots that can automate this process using Google Scholar or something similar? Jarble ( talk) 21:10, 18 August 2017 (UTC)
I am making this request on behalf of WikiProject Finance and WikiProject Investment. The two projects are merging so there are two things that a bot is needed for:
{{
WikiProject Investment}}
banners on talk pages of articles that were only assessed by the Investment project with the {{
WikiProject Finance}}
banner. It would help immensely! Cheers. WikiEditCrunch ( talk) 17:58, 8 September 2017 (UTC)
The site closes in the near future, so it is necessary to save the links in the web archive. Is it possible to collect all the links from our articles to this site on one page, for the convenience of archiving? Many were included through Template: SportsReference. In general, it would be necessary to archive all the athletes' profiles from there, regardless of whether we have articles. Does anyone have suggestions? It would be good to do this in all language editions of Wikipedia. JukoFF ( talk) 13:06, 20 September 2017 (UTC)
Can a bot or tool be coded which has the capability to suggest references for an article, or maybe for a statement? -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 09:59, 22 September 2017 (UTC)
I am requesting a bot to change code like this:
{{cite web
| title = Games played by Jack Cork in 2014/2015
| url = http://www.soccerbase.com/players/player.sd?player_id=45288&season_id=144
| publisher = Soccerbase
| accessdate = 31 January 2015}}
to this:
{{soccerbase season|45288|2014|accessdate= 31 January 2015}}
which gets the job done faster than doing it manually and does not introduce errors in later seasons when providing references for new seasons.
Iggy (
talk) 12:32, 16 September 2017 (UTC)
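A sketch of the rewrite requested above. The player_id is carried over directly; the season_id-to-year offset (144 ↔ 2014, i.e. year = season_id + 1870) is inferred from the single example given and would need verifying against Template:soccerbase season before any real run, as the replies below also caution.

```python
import re

# Sketch: convert the soccerbase {{cite web}} pattern above into
# {{soccerbase season}}. The regex tolerates whitespace/newlines between
# parameters; the season offset is an assumption from the one example.
CITE = re.compile(
    r"\{\{cite web\s*\|\s*title\s*=[^|]*\|\s*url\s*=\s*"
    r"http://www\.soccerbase\.com/players/player\.sd\?player_id=(\d+)&season_id=(\d+)"
    r"\s*\|\s*publisher\s*=\s*Soccerbase\s*\|\s*accessdate\s*=\s*([^}|]+?)\s*\}\}"
)

def to_template(wikitext: str) -> str:
    return CITE.sub(
        lambda m: "{{soccerbase season|%s|%d|accessdate=%s}}"
                  % (m.group(1), int(m.group(2)) + 1870, m.group(3)),
        wikitext,
    )

src = """{{cite web
| title = Games played by Jack Cork in 2014/2015
| url = http://www.soccerbase.com/players/player.sd?player_id=45288&season_id=144
| publisher = Soccerbase
| accessdate = 31 January 2015}}"""
print(to_template(src))
# → {{soccerbase season|45288|2014|accessdate=31 January 2015}}
```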
{{cite news |title=Games played by Wayne Rooney in 2002/2003 |url=http://www.soccerbase.com/players/player.sd?player_id=30921&season_id=132 |publisher=Soccerbase.com |date=6 April 2011 |accessdate=6 April 2011 }}
at
Wayne Rooney, [http://www.soccerbase.com/players/player.sd?player_id=13501&season_id=129 "Games played by Thierry Henry in 1999–2000"]
at
Thierry Henry and
other articles similar to these? --
Kanashimi (
talk) 11:06, 17 September 2017 (UTC)
{{
cite news}}
, {{
cite web}}
and the like by matching the URL. (The only question is whether the formula to go from season year to season ID at
Template:soccerbase season can really be trusted when doing the reverse conversion.) Out-of-template references are of course another matter.
Tigraan
Click here to contact me 15:58, 18 September 2017 (UTC)
{{cite web |title=Richard Cresswell |url=http://www.soccerbase.com/players/player.sd?player_id=8959 |work=Soccerbase |publisher=Centurycomm |accessdate=12 September 2015}}
at
York City F.C.. Is there a better solution? Is using
Template:soccerbase or something like it a good idea? (
Template:Soccerbase is still not in a citation format.) --
Kanashimi (
talk) 13:29, 19 September 2017 (UTC)
|name=
parameter, or with the exceptions (season_id=146, mostly), and I didn't realise there had been no communication: sorry about that. Mostly, you left the season_id=146 ones unchanged, which was OK, but another time, it might be worth asking rather than guessing. There's one edit I found,
here, which is a bit of a mess: I've
fixed it manually. Thank you for your work. cheers,
Struway2 (
talk) 09:50, 22 September 2017 (UTC)
to a certain number of articles and found out there are still around 200+ articles to be done. I probably should have mentioned that at the first post on this thread,
Iggy (
talk) 14:25, 22 September 2017 (UTC)
A Wikidata query informs us that there are (at the time of writing) 1,556 people with an article on English Wikipedia and an ORCID iD in Wikidata. However, Category:Wikipedia articles with ORCID identifiers has only 1,421 members.
This means that 135 - a considerable percentage - of the people found by the Wikidata query do not have the {{ Authority control}} template at the foot of their article.
The same is no doubt true for other authority control identifiers, such as VIAF.
We need a bot, please, to add the template to those articles, and perhaps more.
If the template is added to an article, and no relevant identifier is found, it does not display - so it can safely be added to all biographical articles (if this might fall foul of COSMETICBOT, then it could be added as part of other tasks, such as general fixes done by AWB).
Can anyone kindly oblige? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:15, 22 September 2017 (UTC)
HelpBOT responds to help citations with advice, and welcomes new editors to Wikipedia. — Preceding unsigned comment added by Lookis ( talk • contribs) 04:03, 11 September 2017 (UTC)
There is a great deal of redundancy between the parent Category:Storyboard artists and the child Category:American storyboard artists. Per WP:SUPERCAT "an article should be categorised as low down in the category hierarchy as possible, without duplication in parent categories above it. In other words, a page or category should rarely be placed in both a category and a subcategory or parent category (supercategory) of that category." Could someone create a bot to remove the redundancy? Thanks! Mtminchi08 ( talk) 08:46, 24 September 2017 (UTC)
The categories under Category:Members of the Parliament of England (pre-1707) by parliament were created before July 2016 when RfC on date ranges was closed. That RfC changed how the MOS:DATERANGE is specified.
Currently the names that contain a date-range are in the format ccyy–yy (unless the century is different) rather than the range style now recommended by MOS:DATERANGE ccyy–ccyy. So I am requesting a bot job to run through all the subcategories and sub-subcategories changing the name of the subcategories and sub-subcategories to ccyy–ccyy and the corresponding category names in articles that are within such categories.
For example, the subcategory Category:16th-century English MPs contains a subcategory Category:English MPs 1512–14. To be MOS:DATERANGE compliant it ought to be renamed Category:English MPs 1512–1514.
-- PBS ( talk) 10:42, 23 September 2017 (UTC)
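The rename itself is a simple textual expansion. A sketch (mine, not a CfD-approved process), assuming, per the request, that ranges crossing a century boundary are already written in full:

```python
import re

# Sketch: expand "1512–14" to "1512–1514" in category names.
# Naive: assumes both years share a century, as the request says
# different-century ranges already use four digits. En dash only.
RANGE = re.compile(r"\b(\d{2})(\d{2})–(\d{2})\b")

def expand_range(name: str) -> str:
    return RANGE.sub(lambda m: f"{m.group(1)}{m.group(2)}–{m.group(1)}{m.group(3)}", name)

print(expand_range("Category:English MPs 1512–14"))
# → Category:English MPs 1512–1514
```

Note the pattern does not fire on already-expanded names, so a repeat run is harmless.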
Citations to BBC Genome should be amended thus, as the Genome is merely a front end to scans of The Radio Times. Metadata can be fetched using Citoid (or the Zotero translator for Genome, which Citoid uses). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 09:49, 28 September 2017 (UTC)
I'd like to be notified by bot every time someone joins the WikiProject JavaScript. Is there a bot that can do this? The Transhumanist 06:02, 29 September 2017 (UTC)
{{ National Heroes of Indonesia}} currently includes a transcluded category, Category:National Heroes of Indonesia. Can someone with AWB run over the pages linked in that template to add the category and then remove the category from the template? (Besides the book link.) -- Izno ( talk) 21:22, 28 September 2017 (UTC)
To reduce lint errors in Lint errors: Misnested tag with different rendering in HTML5 and HTML4, would someone be able to do a bot run that would do the following search and replaces:
Maybe even cyberpower678 might be able to get Cyberbot II to do it? -- WOSlinker ( talk) 16:50, 29 September 2017 (UTC)
Can someone use a bot to remove all of the images from the commented-out list of articles here? Abyssal ( talk) 12:59, 2 October 2017 (UTC)
Collapsed List
This is a longstanding thing that annoys me, so here's a BOTREQ for it. Unicode subscripts and superscripts#Superscripts and subscripts block contains a list of the affected characters.
The request covers two distinct tasks:
A) Page moves (with {{DISPLAYTITLE:Foo<sup>2</sup>bar}} to preserve the displayed title)
B) Page cleanup (e.g. ² → <sup>2</sup>)
Headbomb { t · c · p · b} 18:11, 15 August 2017 (UTC)
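For task B, the substitution itself is mechanical; a minimal sketch covering the superscript digits only (the full block listed at Unicode subscripts and superscripts also has signs, parentheses, and letters, plus the matching subscript set):

```python
import re

# Unicode superscript digits -> plain digits (partial; sketch only)
SUPERSCRIPT_DIGITS = {"\u2070": "0", "\u00b9": "1", "\u00b2": "2", "\u00b3": "3",
                      "\u2074": "4", "\u2075": "5", "\u2076": "6", "\u2077": "7",
                      "\u2078": "8", "\u2079": "9"}

def markup_superscripts(text):
    """Replace runs of superscript digits with <sup>...</sup> markup."""
    run = "[" + "".join(SUPERSCRIPT_DIGITS) + "]+"
    return re.sub(run,
                  lambda m: "<sup>" + "".join(SUPERSCRIPT_DIGITS[c] for c in m.group()) + "</sup>",
                  text)
```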
A case-by-case approach might work best though. For page moves with superscripts (filtering User/Wikipedia space), we get
Extended content
I don't see any reason why any of those shouldn't render like we do with Vitamin B6, Golem100, Omega1 Scorpii, 12e Régiment blindé du Canada, Aice5, or Tommy heavenly6 discography. Headbomb { t · c · p · b} 21:33, 15 August 2017 (UTC)
My shared DDNS domain was lost to a domain squatter. I would like the mass removal of links left by DPL bot on User talk pages. In short, remove " (check to confirm | fix with Dab solver)" from edits like [8]. — Dispenser 17:38, 29 September 2017 (UTC)
List of pages in namespace 0-15 that contain the string "dispenser.homenet.org":
Collapsed List
Comment The bot edited my User talk and pointed me to this discussion. Denying cybersquatters is a good cause so I guess the bot's actions are alright. -- Lenticel ( talk) 02:41, 3 October 2017 (UTC)
Comment - I wasn't happy having content removed from my talk archives. Reverted the bot & replaced homenet.org with info.tm which fixed the problem without loss of function. Cabayi ( talk) 09:10, 3 October 2017 (UTC)
Questions - What exactly happened to cause you to lose control of the domain? Is there anything preventing you from seizing it back? If so, what? Whoop whoop pull up Bitching Betty | Averted crashes 18:34, 4 October 2017 (UTC)
I would like to request the following task: point chart references to the correct weeks instead of pointing them to the page showing the up-to-date chart. For example, scope="row"{{singlechart|UK|2|artist=Calvin Harris|song=Feel So Close|date=2011-09-03}} points us to the most recent chart, but changing it to scope="row"{{singlechart|UK|2|artist=Calvin Harris|song=Feel So Close|date=20110903}} directs us to the week when the song first reached its peak position.
Hence the bot will do this: {{singlechart|UK|peak|artist=artistname|song=name of song|date=yyyy-mm-dd}} → {{singlechart|UK|peak|artist=artistname|song=name of song|date=yyyymmdd}}, which removes the dashes in the date parameter. That way every music single's page with this type of code will then have the correct links in its citations. The same problem also exists in the Scottish charts. Iggy ( talk) 16:14, 4 October 2017 (UTC)
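If this ends up as a one-off text run rather than a template-side fix, the dash-stripping could be as simple as the following sketch (parameter order in real transclusions may vary, so this is illustrative only):

```python
import re

def fix_singlechart_dates(wikitext):
    """Strip the dashes from |date=yyyy-mm-dd inside {{singlechart}} so the
    reference points at the archived week (sketch)."""
    return re.sub(
        r"(\{\{\s*[Ss]inglechart\s*\|[^{}]*?\|\s*date\s*=\s*)(\d{4})-(\d{2})-(\d{2})",
        r"\g<1>\g<2>\g<3>\g<4>",
        wikitext,
    )
```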
Replace {{{date}}} with {{digits}} as appropriate. It will strip anything that's not a digit for those instances. @ Iggy the Swan: I can make the request for you, but exactly which links need to be formatted like this? Please provide full links. Thanks. Nihlus 01:33, 6 October 2017 (UTC)
Made some edits to the parameters of {{ Infobox television episode}}, code in Template:Infobox television episode/sandbox, test cases in Template:Infobox television episode/testcases. Requesting a bot after no objection at Wikipedia talk:WikiProject Television#Template:Infobox television episode updates. Updates to template have already been performed; current usages of the template will not be affected by this update.
|episode_list = [[Game of Thrones (season 7)|''Game of Thrones'' (season 7)]]<br>[[List of Game of Thrones episodes|List of ''Game of Thrones'' episodes]]
|season_list = Game of Thrones (season 7) |episode_list = List of Game of Thrones episodes
A bot would just need one set of regex to make these changes.
\|\s*episode_list\s*=\s*\[\[([^\|\]]*).*<br[^>]*>\[\[([^\|\]]*).*
| season_list = $1\n| episode_list = $2
Cheers. -- Alex TW 06:26, 3 October 2017 (UTC)
Per WT:CFD#Auto_listing_previous_discussions there is a desire to have previous CFDs listed at discussions for repeat nominations. A bot should be able to do this, by looking at the category talk pages, or looking through old revisions of the category pages. - Evad37 [ talk 06:39, 6 October 2017 (UTC)
I've been drafting a series of lists of Paleozoic life by state and I used the Alabama page as a template to set the others up. Could someone replace the text "Alabama" with the state named in the title of the following articles? Abyssal ( talk) 16:51, 19 September 2017 (UTC)
I used {{subst:str right|{{subst:PAGENAME}}|30}}, and that picked the state name out of the name of each draft. You'll see I had to fix up the pages with disambiguation suffixes. -- John of Reading ( talk) 17:14, 19 September 2017 (UTC)
==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:PSM V53 D224 The great cretaceus ocean.jpg|thumb|right|Fossil of ''[[Animal]]''.]] [[File:Acteon tornatilis 2.png|thumb|right|A living ''[[Acteon ]]''.]] [[File:Bonefish.png|thumb|right|Illustration of a living ''[[Albula]]'', or bonefish.]] [[File:Ancilla ventricosa 001.jpg|thumb|right|Modern shell of ''[[Ancilla (gastropod)|Ancilla]]''.]] [[File:Appalachiosaurus montgomeriensis.jpg|thumb|right|Life restoration of ''[[Appalachiosaurus ]]''.]] {{Compact ToC}} * †''[[A
@ John of Reading: Hey, John, do you think you could do me a few more favors? Could you run that bot to remove lines of code containing "sp." from the following commented-out list of articles just like you did on September 20th? Then could you scan these articles for the phrases " – o" and " – t" and replace them with " @ o" and " @ t" before removing every "–" from the articles and then replacing the "@"s with the "–" again? Then could you run that operation from September 21st where you replaced the first instance of each capital letter in the format "* †[[A" with a block of code, but with this new smaller block of code listed below:
==A== <!-- Please hide unwanted images in comments like this one so that they may be easily restored later if we change our minds and the image is wanted again --> {{Compact ToC}} * †''[[A
Abyssal ( talk) 14:51, 28 September 2017 (UTC)
Would a bot that turns bare Twitter references into formatted Template:Cite tweet citations be feasible? The four basic parameters of user, number, date, and title should be easily machine-readable, and even though a bot wouldn't be able to interpret the optional parameters, the result would still be better than a bare URL. Madg2011 ( talk) 23:03, 5 August 2017 (UTC)
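The machine-readable part does look straightforward; a sketch of the URL parsing (the URL pattern is an assumption about the common status-link forms, and the title and date would have to be scraped or supplied separately):

```python
import re

def tweet_to_cite(url, title, date):
    """Turn a bare Twitter status link into a {{cite tweet}} call (sketch;
    title and date must be supplied from the page or left for editors)."""
    m = re.match(r"https?://(?:www\.|mobile\.)?twitter\.com/(\w+)/status(?:es)?/(\d+)", url)
    if not m:
        return None
    user, number = m.groups()
    return "{{cite tweet|user=%s|number=%s|title=%s|date=%s}}" % (user, number, title, date)
```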
The discussion is here: Wikipedia:AutoWikiBrowser/Tasks#Comma before Jr. and Sr., and the list of ~1678 pages is here: User:Certes/JrSr/titles. A redirect may or may not exist at the destination page for up to 103, depending on how long it takes the CSD G6 backlog to clear. ~ Tom.Reding ( talk ⋅ dgaf) 22:43, 17 October 2017 (UTC)
Please could someone substitute the wrong File:Coccarda Italia.svg with the correct File:Coccarda Coppa Italia.svg. See here. Thanks -- Arch Enzo 09:04, 26 October 2017 (UTC)
Can a bot change links from *rane.com/par-* to *aes.org/par/*?
Examples:
https://en.wikipedia.org/?title=DBFS&diff=807955578&oldid=785161053
http://www.rane.com/par-d.html#0_dBFS → http://www.aes.org/par/d/#0_dBFS
https://en.wikipedia.org/?title=Boucherot_cell&diff=807957354&oldid=723960052
http://www.rane.com/par-b.html#Boucherot → http://www.aes.org/par/b/#Boucherot
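Generalising from the two examples, the rewrite rule could look like this sketch (it assumes every par-x.html page maps to /par/x/, which should be spot-checked before any run):

```python
import re

def fix_rane_link(url):
    """Rewrite rane.com Pro Audio Reference links to their aes.org
    equivalents (sketch generalised from the examples above)."""
    return re.sub(
        r"https?://www\.rane\.com/par-([a-z0-9]+)\.html(#\S*)?",
        lambda m: "http://www.aes.org/par/%s/%s" % (m.group(1), m.group(2) or ""),
        url,
    )
```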
Still 26 transclusions. -- Magioladitis ( talk) 21:18, 12 November 2017 (UTC)
Per the recent move request Talk:Doctors (2000 TV series)#Requested move 27 October 2017 that resulted in no consensus, could all the links to Doctors (TV series) be automatically changed to match the article titles, so that there is no problem with the redirect from Doctors (TV series) to the disambiguation page? -- wooden superman 15:29, 27 November 2017 (UTC)
This is related to my previous request, but was so different that I thought I'd make a new heading for it. I'm making a series of ~50 articles listing the prehistoric animals that once inhabited each US state. I was wondering if someone could rig a bot to search the articles linked to in the list for all the images and copy them into the article under the list heading in the format "[[File:Alethopteris PAMuseum.jpg|thumb|right|Fossil of ''[[articletitlegenusname]]''.]]". Draft:List of the Paleozoic life of Alabama is a good example of what I'm going for; I had originally tried to do this manually. Article list hidden in a comment here. Abyssal ( talk) 19:40, 19 September 2017 (UTC)
Collapsed List
I have done a little testing on Draft:List of the Paleozoic life of Alabama, but I don't know if this is what you want. Please tell me what you think and how I can improve the tool, thank you. -- Kanashimi ( talk) 08:17, 22 September 2017 (UTC)
By the way, here is the source code: 20170922.scan link targets in page.js on GitHub -- Kanashimi ( talk) 09:04, 22 September 2017 (UTC)
Please let me know any time when you are ready. -- Kanashimi ( talk) 11:51, 8 October 2017 (UTC)
Hi,
In enwiki, 189 webpages have an old external link to the website http://199.9.2.143. Source: Special:LinkSearch search.
The issue, with an example: the link http://199.9.2.143/tcdat/tc10/ATL/12L.JULIA/trackfile.txt has a server redirect to the HTTPS protocol but fails.
Please replace http://199.9.2.143/ with https://www.nrlmry.navy.mil/ using a bot. -- Manu1400 ( talk) 06:19, 8 December 2017 (UTC)
Can anyone use a bot to find instances of the following text and remove the crosses in the following articles commented out in the section code? Abyssal ( talk) 23:59, 6 December 2017 (UTC)
Acebot has not edited since 25 September, and Ace111 has not fixed the bot. It needs to be replaced or fixed as soon as possible, since {{ NUMBEROF/data}} is used in a number of Wikipedia articles and has not been updated manually. Jc86035 ( talk) 07:55, 26 November 2017 (UTC)
It's been argued that linefeed (LF) characters in the wiki source "create many ... issues" including "both rendering [and] accessibility issues" ( Help talk:Citation Style 1#Pointless whitespace error). Someone's set up the citation templates to throw red error messages that try to force editors to find and remove LFs in the template input. This is extremely undesirable, an abuse of the citation templates to try to arm-twist people into doing technical work they're often not competent to do (the average editor doesn't even know what a linefeed is), and interfering with a basic all-editors responsibility to cite sources.
This is obviously bot work, and since it's fixing legit accessibility and rendering problems, it's not WP:COSMETICBOT. I would suggest
Frankly, it's weird that MediaWiki doesn't already deal with this as part of its routine parsing upon page save. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 21:26, 3 October 2017 (UTC)
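A rough sketch of what the bot pass could do (real citation templates can nest other templates, so a production bot would need a proper wikitext parser rather than this flat regex):

```python
import re

def collapse_linefeeds(wikitext):
    """Replace linefeeds inside flat {{cite ...}} calls with single spaces
    (sketch; does not handle nested templates)."""
    def fix(match):
        return re.sub(r"\s*\n\s*", " ", match.group(0))
    return re.sub(r"\{\{\s*[Cc]ite [^{}]*\}\}", fix, wikitext)
```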
Here's an example of where this causes an issue:
{{cite journal}}: |access-date= requires |url= (help); Cite journal requires |journal= (help)
I'll admit, I was under the impression that
{{cite journal}}: Cite journal requires |journal= (help)
would equally be broken, but apparently those are not. Headbomb { t · c · p · b} 18:03, 4 October 2017 (UTC)
A vast majority of the lint errors for double-leading-colons in links (which are no longer rendered as correct links) are from bot-created pages that are subpages of Wikipedia:Version 1.0 Editorial Team. I wanted to propose a bot that would fix these errors to unclog the lint error list so we can identify other sources of such errors. -- Ahecht ( TALK PAGE) 15:32, 9 October 2017 (UTC)
To me, it looks like the vast majority of the errors (at least in the first few pages) are in talk page signatures of "::User:RHaworth", which could probably be fixed by a bot or a patient AWB editor. – Jonesey95 ( talk) 14:00, 11 October 2017 (UTC)
We have lots of lists of Wikipedians, accounts who have made the most edits, created the most new articles, deleted the most pages and handed out the most blocks. Why not have a list of Wikipedians who have received the most thanks? Ϣere SpielChequers 13:24, 4 October 2017 (UTC)
Wikipedia:Database reports/Thanks usage -- Edgars2007 ( talk/ contribs) 17:12, 19 October 2017 (UTC)
If someone is editing on mobile, there's a chance the link they wish to cite will be an amp page. Requesting a bot to identify these pages and convert them to the full version. Example: amp full. Terrorist96 ( talk) 18:29, 23 September 2017 (UTC)
Also, when I tried saving my post, I got this warning about the google.com/amp url (so it seems that Wiki already prevents you from posting a google.com/amp/ link, which is why I modified it above):
Your edit was not saved because it contains a new external link to a site registered on Wikipedia's blacklist. Blacklisting indicates past problems with the link, so any requests should clearly demonstrate how inclusion would benefit Wikipedia.
The following link has triggered a protection filter: google.com/amp/ Either that exact link, or a portion of it (typically the root domain name) is currently blocked. Solutions:
Terrorist96 ( talk) 18:36, 23 October 2017 (UTC)
Reposting...
Four of the task forces for WikiProject Caribbean have graduated to full-fledged WikiProjects with their own banner templates, and the task force parameters have been deprecated: {{WikiProject Cuba}}, {{WikiProject Grenada}}, {{WikiProject Haiti}}, and {{WikiProject Trinidad and Tobago}}. We need a bot to go through the existing transclusions of {{WikiProject Caribbean}} and perform the following changes:
Please also migrate the task-force importance parameters if they exist, for example |cuba-importance=. If there isn't a task-force importance parameter, just leave the importance blank. The |class=, |category=, |listas=, and |small= parameters should be copied from the {{WikiProject Caribbean}} template if they exist. Kaldari ( talk) 20:39, 23 October 2017 (UTC)
Since this pertains to (semi) automation, I thought you might like a heads up about what I've been working on...
I'm in the process of building scripts for viewing outlines and for outline development. So that other programmers can follow along with how the source code works, I've provided extensive notes on the scripts' talk pages.
So far, there is:
It is my objective to build a set of scripts that fully automate the process of creating outlines. This end goal is a long way off ( AI-complete?). In the meantime, I hope to increase editor productivity as much as I can. Fifty percent automation would double an editor's productivity. I think I could reach 80% automation (a five-fold increase in productivity) within a couple years. Comments and suggestions are welcome.
There's more:
I look forward to your observations, concerns, ideas, and advice. The Transhumanist 08:25, 26 October 2017 (UTC)
Is it possible to add such cleanup tasks to one of the existing bots, or create a bot for such cleanups? These fill up the maintenance categories of unknown parameters unnecessarily. Even GA-level articles have such issues. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 13:39, 14 September 2017 (UTC)
|300px. We might need to revisit the full image syntax in the infobox once we determine which ones don't require it, or at least convert all of them to not require it. Nihlus 21:32, 27 October 2017 (UTC)
|image=File:Blah.jpg|thumb|250px in their articles. A bot that scrubbed these categories for those straightforward errors would be helpful. – Jonesey95 ( talk) 01:53, 28 October 2017 (UTC)
This is a huge task and seems nearly impossible to do manually. It would enable effective template maintenance and easier consolidation. I have been trying to clean up Category:Pages using infobox Hindu temple with unknown parameters for quite some time and the task seems a never-ending process. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 17:52, 19 September 2017 (UTC)
{{Infobox Hindu temple}}? Is there more? — nihlus kryik ( talk) 21:28, 23 September 2017 (UTC)
Replacing |pushpin_map= with |map_type= across all the infoboxes can be a good start. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 03:11, 28 October 2017 (UTC)
I recently found over 100 broken links to academia.edu, but most of them haven't been repaired yet. Would it be possible to automatically repair these links, or at least automatically tag them with the {{ dead link}} tag? Jarble ( talk) 02:18, 28 October 2017 (UTC)
Would it be possible/appropriate for a bot to run across the articles in Category:CS1 errors: deprecated parameters, replacing the deprecated parameters with their non-deprecated counterparts? I think this should be quite easy for a bot to do (it's 5 x 1-for-1 substitutions) and would remove the (imo unsightly) "Cite uses deprecated parameter" error messages from about 10,000 articles. DH85868993 ( talk) 03:25, 8 November 2017 (UTC)
BTW, I believe AWB with genfixes on will take care of those. Headbomb { t · c · p · b} 00:37, 9 November 2017 (UTC)
900 articles transclude Template:Infobox City which is a redirect to Template:Infobox settlement [10].
They were created over three days, between 6 and 9 September 2007, and all seem to be about municipalities of Spain.
The pipe symbols are placed at the end of each line, against current practice. A type parameter is missing. Sometimes the official name includes ", Spain". On some pages coordinates are present at the bottom of the page, which could go into the infobox. A link to the Template:Infobox settlement documentation is not present.
Example diff (3 edits) to fix:
Would be nice if a bot could at least fix some of this. 77.179.11.240 ( talk) 03:56, 8 November 2017 (UTC)
Can anyone find instances of the following commented out text in the following articles and replace them with their equivalents from the final list? Abyssal ( talk) 03:44, 17 December 2017 (UTC)
@ Kanashimi: A few months ago I asked you to use your cewbot to scan a list of articles for links to other articles and sort all of the images in the linked articles under the alphabetical headings of the original articles. Could you perform this same operation for the new list of articles I've commented out below? I'd have posted this to your article page but I can't read Chinese. Thanks for all the help you've provided so far. Abyssal ( talk) 15:43, 18 December 2017 (UTC)
Category:Lists of taxa by U.S. state of type locality Abyssal ( talk) 17:04, 12 December 2017 (UTC)
Hi, I'm an admin in Azerbaijani Wikipedia and I've been referred to coders. We were just wondering is it possible to create archivebot and patrol system for Az.Wikipedia? -- Azerifactory ( talk) 23:55, 12 December 2017 (UTC)
New task for NihlusBOT: to convert something like
"[http://www.fifadata.com/document/FWWC/2015/pdf/FWWC_2015_SquadLists.pdf 2015 World Cup] fifadata.com. 2 December 2015. Retrieved 2 December 2017" to
"{{Cite web|url=http://www.fifadata.com/document/FWWC/2015/pdf/FWWC_2015_SquadLists.pdf|title=2015 World Cup|publisher=fifadata.com|date=2 December 2015|accessdate=2 December 2017}}"
where in the square brackets, the space after the url is the title, the first date represents the date parameter and the latter date after 'Retrieved' represents the accessdate.
Iggy ( talk) 19:06, 2 December 2017 (UTC)
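For what it's worth, the example is regular enough to sketch (assuming the "title] publisher. date. Retrieved accessdate" shape holds; anything looser would need human review before saving):

```python
import re

BARE_REF = re.compile(
    r"\[(?P<url>https?://\S+) (?P<title>[^\]]+)\] "
    r"(?P<publisher>.+?)\. (?P<date>[^.]+)\. Retrieved (?P<accessdate>[^.\n]+)"
)

def to_cite_web(text):
    """Wrap bare bracketed references of the shape above in {{Cite web}}
    (sketch; real wikitext will need looser matching and sanity checks)."""
    return BARE_REF.sub(
        r"{{Cite web|url=\g<url>|title=\g<title>|publisher=\g<publisher>"
        r"|date=\g<date>|accessdate=\g<accessdate>}}",
        text,
    )
```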
It is currently somewhat awkward to contact an administrator on Wikipedia for any time-sensitive tasks (e.g. revdeling something). The current best way to find an administrator who is online now is through IRC, which is a lot of hoops at times. Therefore why don't we create a bot that:
1) Looks at Special:RecentChanges or another source to provide a list of admins who are actively editing.
2) Posts the 3ish admins who are most likely to be active right now on some new wikipedia page.
3) Continually loops through the two above steps every couple of minutes to keep the resource useful.
This also has the side-effect of providing a way to contact a neutral 3rd party admin in a dispute.
I don't think this is terribly hard to code, but I could be very wrong. I don't think that we need anything sophisticated for "most likely to be active" - it's useful with something as blunt as ranking admins by most recent edit/logged action. The only tricky point in my mind is that it is a bot that has to be running 24/7 with high reliability, because we might be linking to its page from places like WP:Emergency, where one of the steps is contacting an admin.
Is this a practical useful bot that should be created? Tazerdadog ( talk) 08:24, 13 November 2017 (UTC)
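The ranking step really is that blunt; a sketch of the core logic with the recent-changes fetch left out (a real bot would pull admin names and last-edit timestamps from the API and rerun this every couple of minutes):

```python
from datetime import datetime, timedelta

def recently_active(last_edit, now, window_minutes=30, top_n=3):
    """Rank admins by most recent edit and keep the top few inside the
    activity window (sketch; last_edit maps admin name -> timestamp of
    their latest edit or logged action)."""
    cutoff = now - timedelta(minutes=window_minutes)
    active = sorted(((ts, name) for name, ts in last_edit.items() if ts >= cutoff),
                    reverse=True)
    return [name for ts, name in active[:top_n]]
```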
On behalf of Wikipedia:WikiProject New York City, I would like to place a request for a bot to archive all article-space urls for DNAinfo and the Gothamist network. These sites have all suddenly shut down with all of their articles redirecting to the shutdown notice, leaving thousands of dead links across the entire wiki. Here is a list of links:
Basically, a bot should either replace the existing external links with web.archive.org citations or something similar, or archive the citation like InternetArchiveBot already does. I can't do it with Archive Bot because these pages are all redirect pages (301 errors) and technically not 404 errors. epicgenius ( talk) 00:46, 3 November 2017 (UTC)
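As a stopgap while a proper IABot run is sorted out, building the replacement links is trivial (sketch; a real bot would first query the Wayback availability API for the nearest pre-shutdown snapshot, and the example URL here is hypothetical):

```python
def to_wayback(url, timestamp="20171101"):
    """Wrap a dead DNAinfo/Gothamist URL in a Wayback Machine link (sketch;
    the timestamp should come from the nearest actual snapshot)."""
    return "https://web.archive.org/web/%s/%s" % (timestamp, url)
```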
|dead-url= statuses to "no". epicgenius ( talk) 02:12, 8 November 2017 (UTC)
WP:TREE requests that most of the transclusions of the {{Taxonomy/}} family of templates be WP:SEMI protected. The entire family contains ~33,700 templates, but the top 3000 templates (~9%), by transclusion count, account for ~96.6% of all transclusions. These templates are hierarchical, so they have a greater potential for disruption than their transclusion count may suggest. For example, 588 transclusions of {{Taxonomy/Planulozoa}} were generated by a malicious and/or unknowledgeable editor, and then summarily removed by Peter coxhead, with only a few edits by both parties. Since there are so many templates, monitoring all of them is not feasible, so some basic level of protection is desired and here requested. Furthermore, changes to these templates are infrequent and almost exclusively performed by experienced editors, so WP:SEMI seems minimally appropriate for the top 3,000, if not all of these templates.
The resulting list of 2734 permission-free templates is here. ~ Tom.Reding ( talk ⋅ dgaf) 21:30, 12 November 2017 (UTC)
This should be undone. No notice of this proposal was made to the botanists at WP:PLANTS. Our membership does not have template rights and was due to begin a massive overhaul of bryophyte and pteridophyte taxonomy, affecting hundreds of templates. -- EncycloPetey ( talk) 23:59, 12 November 2017 (UTC)
Once the {{Taxonomy/}}-expansion efforts are complete, I'm sure that you can post a list of all of the new subtemplates here at BOTREQ, referencing this request, and it will be taken care of. If not, you can ping anyone mentioned in this thread to help get the ball rolling. If you want/need, ping me, regardless, at that time, and I'll do a database scan to make sure no {{Taxonomy/}} templates are left out. ~ Tom.Reding ( talk ⋅ dgaf) 02:00, 13 November 2017 (UTC)
A lot of URLs need cleanups like this. This is a never-ending process, hence a bot might be able to do this job better. -- Pankaj Jain Capankajsmilyo ( talk · contribs · count) 03:18, 28 October 2017 (UTC)
Something already exists to strip utm-related parameters in strings. I'm not sure if there's one to trim Google strings. -- Izno ( talk) 14:11, 29 October 2017 (UTC)
Don't just blindly remove the fragment identifiers. They can be there to point to the specific content on the page, for instance. — Omegatron ( talk) 01:39, 31 October 2017 (UTC)
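Per Omegatron's caution, the fragment has to survive; something like the following keeps everything except the utm_* parameters (a sketch, not a drop-in bot task):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking(url):
    """Drop utm_* query parameters while preserving the rest of the URL,
    including any #fragment (sketch)."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```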
The UK Department of Education schools information site Edubase recently closed down and the information was moved to a new gov.uk site. The {{ edubase}} template, which is embedded in {{ Infobox UK school}}, has been fixed to point to the new URLs. But there still exist a bunch of citations embedded in the school articles which point at the old Edubase site and are therefore all {{ dead link}}s now. I haven't been able to figure out the full extent of the issue, but the first 4 I have looked at - South Farnham School, Cobham Free School, The Royal Alexandra and Albert School, All Hallows Catholic School - all have one. I have randomly clicked through the rest of the articles in {{ schools in Surrey}} and quite a few have Edubase references. But some don't. By the looks of things Category:England education navigational boxes by county should be the top-level source for which pages to check. I have had a look at writing a bot to make the fixes but it looks a bit of a tall order for a bot novice, despite my being a coder in RL. I think the logic would be something like this:
Anyone fancy doing this? Fob.schools ( talk) 16:32, 29 October 2017 (UTC)
The subsections of 1000 (number) have changed. What I would like to have happen is for
where s is one of
and t is
The article was reorganized. There may be other similar changes in other articles, but let's start with 1000. — Arthur Rubin (talk) 01:36, 6 November 2017 (UTC)
Could a bot run and do something similar to this for the English Wikipedia? The latter version is compatible with Visual Editor. -- Magioladitis ( talk) 22:55, 12 November 2017 (UTC)
Use a bot or semi-automated program to find infoboxes that have not been updated to include upright-factor image sizing support, and make a list to enable editors to make the minor edits to update them. This is a simple edit, like diff.
For those unaware of the concept, it is basically a way of making images responsive, explained in the picture tutorial as good practice. However, defining an image with upright factors requires template support, which is not yet across the board.
It would be very helpful if someone could use a bot/quarry/magic thingie to create a to-do list of templates that are not yet fixed, getting a bot to actually do the fix is probably unwise. Dysklyver 12:11, 12 November 2017 (UTC)
I regularly clean up links to bad sources. The interface does not permit linksearching by namespace, and in any case many links mentioned on Talk are being proposed as sources. I would like to suggest:
This would :
Some examples:
Thoughts? Guy ( Help!) 22:06, 16 November 2017 (UTC)
Please make a translation bot to translate articles on other Wikipedias (not the English Wikipedia) using Google Translate. — Preceding unsigned comment added by 5.219.141.214 ( talk) 11:38, 14 January 2018 (UTC)
footballdatabase.eu has more articles about football; please make a bot for adding articles from the site. — Preceding unsigned comment added by 37.254.182.198 ( talk) 09:06, 14 January 2018 (UTC)
But most of the articles that users create are small articles, and they take a relatively long time to create, so a robot could make such small articles faster than the users can. — Preceding unsigned comment added by 5.22.35.28 ( talk) 12:42, 15 January 2018 (UTC)
Can someone replace the following code:
[[File:Canis dirus reconstruction.jpg|right|50 px]]<!-- [[Dire wolf]] -->
with
[[File:Canis dirus reconstruction.jpg|thumb|right|Artist's restorations of a ''[[Canis dirus]]'', or dire wolf.]]<!-- [[Dire wolf]] -->
across these articles? Abyssal ( talk) 15:34, 9 January 2018 (UTC)
Perhaps there should be an English version of the Lsjbot that's on the Swedish and Cebuano Wikipedias, to generate stub articles interlinked in Wikidata for current redlinks on non-controversial topics like airports, locations (e.g. comarcas of Spain, municipalities, political subdivisions, etc.), events (e.g. aviation accidents), geology, small cities, military divisions and awards, technologies, plants, animals, medicine, etc. -- PlanespotterA320 ( talk) 02:35, 29 December 2017 (UTC)
Running IABot on an article checks whether an archive exists for each citation, and adds links to existing archives, but if an archive does not exist then it does not create one. Is there a bot (or script) which can be run on an article to create archives for all external links, or could someone who has the skills create one? In view of the constant link rot problems, this would be incredibly useful. Dudley Miles ( talk) 08:52, 12 December 2017 (UTC)
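The Wayback Machine does expose a "Save Page Now" endpoint that a script could hit for each external link on an article; a minimal sketch of building the capture requests (the actual fetching, throttling, and error handling are left out):

```python
SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_request_url(url):
    """URL that asks the Wayback Machine to capture a page now (Save Page
    Now); a script would GET this for each external link, with throttling."""
    return SAVE_ENDPOINT + url
```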
I no longer have the time to maintain my bot's task 1, and it has been more difficult than I expected, partially due to AWB quirks and a lack of helpful documentation. So, I would like someone else to help substitute the following templates, removing integer parameters in those which are named "Infobox", and removing hidden comments in the first line of the templates. If more than one of these is on a page then all should be substituted in the same edit. The bot does not really need to do anything else since most of the cleanup is handled by the substitution through Module:Unsubst-infobox (although if you want you can additionally replace |length={{Duration|m=mm|s=ss}} with |length=mm:ss and stuff like that).
Furthermore, pages in these categories should not have the templates substituted due to errors which need to be fixed manually for various reasons. I would have done the substitutions sooner by using User:AnomieBOT/docs/TemplateSubster but it would substitute all of the transclusions, and at a very slow rate (since AnomieBOT does a lot of other things and never runs tasks in parallel). The table nesting errors are probably bot-fixable but I couldn't do it with AWB and I can't write in Python.
I believe these would not count as cosmetic changes, since some of these are per the result of TfD discussions which closed with a consensus to merge, and because pages in these templates with deprecated parameters are automatically included in a tracking category and would be removed from the category upon substitution. About 200,000 pages would be affected. Jc86035 ( talk) 07:33, 23 October 2017 (UTC)
@ Nihlus: No, I don't think so. I don't think I have the time to do it and an experienced pywiki user could do the fixes much faster than I could.
Most of these fixes, for the Module:String errors, would probably involve changing the parameter name from |Last/Next single/album= to |prev/next_title= to bypass the Module:String fixes if there's no date in the parameter and the title doesn't contain any slashes (and removing the italics, bold, quote marks (only if paired at start/end or the title is enclosed in a link – see AWB configuration for how irritating these are)), and waiting three to twenty years for the Wikidata-compatible infoboxes to come around for the dates to be added. (Note that |Last single= and similar can also occasionally be |last_single=.) There are also
other problems
There are also other probably-automatable fixes which do not remove pages from the error categories.
Naturally, I did not get around to any of these, and none of these are in the AWB configuration. Pretty much all of the AWB configuration is adding <br />
and fixing italics, quote marks, brackets, etc.
This discussion (and some others on Ojorojo's talk page) may help. Jc86035 ( talk) 09:07, 23 October 2017 (UTC)
The page list is probably too long for AWB to do on its own (I think it sets its own limit at 25,000), so I would collate all of the pages transcluding the templates into a text document, remove duplicate lines with BBEdit or another similarly featured text editor (leaving one of each group of duplicates), then do the same for the pages in the error categories, then stick both lists into the same text document and remove duplicate lines (leaving none of the duplicate lines). Jc86035 ( talk) 09:19, 23 October 2017 (UTC)
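For what it's worth, the collate/dedup/remove-both workflow described above amounts to a set difference, which a short script can do directly instead of a text editor; a minimal sketch (list contents are illustrative):

```python
def pages_to_process(transclusions, error_pages):
    """Deduplicate the transclusion list, then drop every page that
    appears in one of the error categories, preserving first-seen order."""
    errors = set(error_pages)
    seen = set()
    keep = []
    for title in transclusions:
        if title in errors or title in seen:
            continue
        seen.add(title)
        keep.append(title)
    return keep
```

The result could then be saved one title per line and fed to AWB as a page list.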
In my ongoing quest to clean up the music templates, I'm happy to inform everyone that the Singles template is now free of errors and only contains valid template fields (be they old or new). - X201 ( talk) 08:42, 16 November 2017 (UTC)
Nihlus, are you doing, or going to do, this task (just the substitution, not the other things)? If you aren't, that's fine, since I might be able to do this myself at some point in the next four months. Jc86035 ( talk) 08:04, 26 November 2017 (UTC)
Code and articles on this page. Abyssal ( talk) 02:58, 20 December 2017 (UTC)
Still really need some help with this. Abyssal ( talk) 04:11, 25 December 2017 (UTC)
Any takers? Abyssal ( talk) 14:26, 2 January 2018 (UTC)
Please make a bot for adding articles from Wikia, for example nintendo.wikia.com — Preceding unsigned comment added by 5.75.62.30 ( talk) 07:05, 20 January 2018 (UTC)
title says it all — Preceding unsigned comment added by 2601:247:c101:b6c0:599:60b6:ce0b:32cc ( talk)
Would it be possible for a bot to automatically fix errors like Special:Permalink/778228736, where some redirect category templates are placed within {{ Redirect category shell}} but some aren't? feminist ( talk) 10:29, 10 January 2018 (UTC)
Please make a bot to update articles, for example updating sports players' stats (games and goals) and league tables — Preceding unsigned comment added by 5.219.145.98 ( talk) 08:19, 28 January 2018 (UTC)
Please make a bot to update articles, for example updating soccer player stats (games and goals) and soccer tables — Preceding unsigned comment added by 5.22.34.89 ( talk • contribs) 08:05, 30 January 2018 (UTC)
Please make a GeoNames bot to add articles from geoname.org — Preceding unsigned comment added by 37.255.6.103 ( talk • contribs) 10:26, 30 January 2018 (UTC)
Please make a Catalogue of Life bot to add articles from catalogueoflife.org — Preceding unsigned comment added by 37.255.6.103 ( talk • contribs) 10:43, 30 January 2018 (UTC)
Hello fellow Wikipedians -
The image removal bots (e.g. CommonsDelinker) are doing their job of removing deleted images, but I have noticed they don't delete the existing caption if there was one. For example, this diff removed a deleted photo but not the existing caption. I was wondering if there could be a bot which will remove the captions from infoboxes without images, or is there already one? Iggy ( talk) 22:04, 8 December 2017 (UTC)
I had been using a link to a PDF article as a citation in my articles on Tamil films.
This has been used in numerous articles for the past year or so. Now I find that this URL is blacklisted and a bot has placed a notification to that effect in many articles. It would be a tiring job to replace the link in individual articles.
Is it possible to do "replace ..... with ...."
The blacklisted link is: https://chasingcinema.files.wordpress.com/2015/09/text.pdf
to be replaced with: https://indiancine.ma/texts/indiancine.ma%3AEncyclopedia_of_Indian_Cinema/text.pdf
Thank you.--UKSharma3 ( User | talk | Contribs) 10:25, 7 January 2018 (UTC)
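Since both URLs are fixed strings, this is a plain find-and-replace (e.g. via AWB or pywikibot's replace.py, as suggested above); a minimal sketch of the substitution itself:

```python
OLD_URL = "https://chasingcinema.files.wordpress.com/2015/09/text.pdf"
NEW_URL = ("https://indiancine.ma/texts/"
           "indiancine.ma%3AEncyclopedia_of_Indian_Cinema/text.pdf")

def replace_blacklisted_link(wikitext):
    """Swap every occurrence of the blacklisted PDF link for the
    indiancine.ma copy; text without the link is returned unchanged."""
    return wikitext.replace(OLD_URL, NEW_URL)
```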
As per this discussion on my talk page, there are about 433 New York City Subway station articles tagged by WikiProject New York City Public Transportation, the vast majority of which are missing a tag for WikiProject New York City. The list is here. I was wondering if a bot could go around and add {{ WPNYC}} tags to the talk pages that are missing them. epicgenius ( talk) 21:28, 8 January 2018 (UTC)
* |importance= for the pages that are obviously low, mid, high, etc., and leave the unsure-importance ones blank for later assessment.
* For |class=, would inheriting {{ WikiProject Trains}}' |class= be desired/appropriate? ~ Tom.Reding ( talk ⋅ dgaf) 21:57, 8 January 2018 (UTC)
low for all of these tags, since I don't think any single station is particularly essential to NYC itself. epicgenius ( talk) 22:27, 8 January 2018 (UTC)
Also, the |transportation-importance= parameter is redundant since, the vast majority of the time, it was already defined under the WP:TRAINS template. epicgenius ( talk) 05:49, 9 January 2018 (UTC)

I would like to request that a bot take care of all the faulty daily page view tags on possibly thousands of article talk pages. They just die and don't work, and need to be replaced to work again. It is most likely the same situation on most articles tagged with it at some point; see Talk:Oba_Chandler, where it simply dies and turns blank. Ping me for questions or confirmation that a bot will take care of the issue. -- BabbaQ ( talk) 22:20, 26 January 2018 (UTC)
Would it be possible to have a bot fix the CWGC ( Commonwealth War Graves Commission) URLs from the previous format (which no longer works) to the current one? URLs of the form http://www.cwgc.org/search/casualty_details.aspx?Casualty=XXXX should be in the form http://www.cwgc.org/find-war-dead/casualty/XXXX/ (not sure if the trailing / is needed; possibly it is). The same applies to the cemeteries, which were in the form http://www.cwgc.org/search/cemetery_details.aspx?cemetery=XXXX&mode=1 and should instead be in the form http://www.cwgc.org/find-a-cemetery/cemetery/XXXX. The casualty and cemetery ID numbers are not a set number of digits long; they seem to vary between 4 and 7 digits from what I have seen, but might be fewer or more as well. This has been broken for nearly 2 years now. As of 30 January 2018:
So a total of 839+1641 = 2480 links to fix. @ Pigsonthewing: as we discussed this back then. See also {{ CWGC}} and {{ CWGC cemetery}}. Carcharoth ( talk) 00:14, 30 January 2018 (UTC)
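Because only the numeric IDs carry over between the two formats, both rewrites are capture-group regex substitutions; a minimal sketch (assumes the IDs are purely numeric, as described above):

```python
import re

CASUALTY = re.compile(
    r"http://www\.cwgc\.org/search/casualty_details\.aspx\?Casualty=(\d+)")
CEMETERY = re.compile(
    r"http://www\.cwgc\.org/search/cemetery_details\.aspx\?cemetery=(\d+)(?:&mode=1)?")

def fix_cwgc_urls(text):
    """Rewrite old-style CWGC casualty and cemetery URLs to the new format."""
    text = CASUALTY.sub(r"http://www.cwgc.org/find-war-dead/casualty/\1/", text)
    text = CEMETERY.sub(r"http://www.cwgc.org/find-a-cemetery/cemetery/\1", text)
    return text
```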
Can someone create a report with the following information in a table based on {{ Connected contributor (paid)}}?
You should skip any pages that aren't in the Talk: namespace (e.g. general disclosures on user pages). ~ Rob13 Talk 21:38, 26 November 2017 (UTC)
The new form of {{lang-xx|italic}}, without '' in the second part, has generated errors in thousands of articles. I therefore suggest bringing back the old form of the template, so that the old versions which have the markup for italics are again considered correct. Mark Mercer ( talk) 17:51, 15 December 2017 (UTC)
Replace the "other_names" param with "nickname", in mixed martial arts biographies only. Can someone create that? Thanks. TBMNY ( talk) 17:49, 15 December 2017 (UTC)
Most articles for Chinese railway lines were recently renamed, and a lot of pages need to be updated. Could someone make a bot to:
* add <noinclude>[[Category:People's Republic of China rail transport succession templates]]</noinclude> if the page is not already in that category;
* change |system=CR to |system=CRH, updating template/page titles and station names (maybe adding |notemid=Part of the [[Name high-speed railway]] for sub-lines which are part of designated corridor lines).

There may be other issues with the templates and articles which I haven't addressed. This should affect about 100 templates and 450 articles (a surprisingly small number, given the number of railway stations in China). Consider doing genfixes.
Thanks, Jc86035 ( talk) 17:36, 22 December 2017 (UTC)
There are two similar parameters in Template:Sfn, p and pp: p is for a single page; pp is for a range of pages. Sometimes users (like myself...) put a range of pages in p, or a single page in pp. It seems like a good bot task to run through sfn templates and, if the value is a range of pages, change the parameter from p to pp (see the John Glenn history for a recent example). Another thing that could be tacked on is replacing hyphens and em dashes with en dashes in the pp parameter.
It may make more sense to just make p and pp one parameter in the Sfn template, and have the template respond correctly whether it is given a single page or a range of pages. Long story short: there are several solutions to this that can be automated, and it would save some editing time. Kees08 (Talk) 01:16, 4 February 2018 (UTC)
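For illustration, the detection step is a regex over the |p= value; a naive sketch (over-broad as written — a real run would anchor the match to {{sfn}}-family templates rather than any |p= parameter):

```python
import re

# A page range in |p=: two numbers joined by a hyphen, en dash or em dash.
PAGE_RANGE = re.compile(r"\|\s*p\s*=\s*(\d+)\s*[-–—]\s*(\d+)")

def p_to_pp(wikitext):
    """Move page ranges from |p= to |pp= and normalise the separator
    to an en dash; single pages are left untouched."""
    return PAGE_RANGE.sub(r"|pp=\1–\2", wikitext)
```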
Not sure if there is a proper way to close this, but I do not think it is possible for the rationale stated above. Kees08 (Talk) 18:50, 21 February 2018 (UTC)
Hi, please create a bot for adding Wikipedia articles and pages to Wikidata. For example, Portugal–Thailand relations had no item in Wikidata, but I created one. — Preceding unsigned comment added by 37.254.181.81 ( talk) 09:47, 14 February 2018 (UTC)
I made a category called Category:Articles needing their RCDB number moved to Wikidata, which has articles placed in it by having an RCDB number in {{ Infobox roller coaster}} or {{ RCDB}}. I was able to keep up with copying these numbers to the Wikidata entries for a while, but now I'm having trouble. I'd love if someone could make a bot that would be able to help me out. Elisfkc ( talk) 02:25, 13 December 2017 (UTC)
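The first half of such a bot — pulling the ID out of the wikitext — is straightforward; a minimal sketch (the parameter name rcdb_number and the numeric-only format are assumptions; writing the value to Wikidata would then go through pywikibot's item API):

```python
import re

# Assumed parameter name; check the {{Infobox roller coaster}} docs.
RCDB_PARAM = re.compile(r"\|\s*rcdb_number\s*=\s*(\d+)")

def extract_rcdb_id(wikitext):
    """Return the RCDB number from the infobox wikitext, or None if absent."""
    m = RCDB_PARAM.search(wikitext)
    return m.group(1) if m else None
```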
There was a change (relatively) recently to {{ Infobox former country}} in which the symbol parameter was changed to symbol_type_article. (See also: Template talk: Infobox former country#"Symbol" not currently functional.) Other than the parameter's name, nothing about it has changed (at least from the user's point of view), so it should just be a straight swap. Since the template is used on >3000 pages, this seems like a job best suited to a bot, and apparently there is already one which hunts down deprecated parameters. Could this please be added to that bot's tasks (or if not, another bot set up to do so)? Thanks. Alphathon /'æɫ.fə.θɒn/ ( talk) 16:35, 30 October 2017 (UTC)
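A straight parameter swap like this is one substitution per page; a minimal sketch (over-broad as written — a real bot would confirm the match sits inside {{Infobox former country}} before editing):

```python
import re

def rename_symbol_param(wikitext):
    """Rename |symbol= to |symbol_type_article=, keeping the value as-is.
    Does not touch parameters that merely start with "symbol"."""
    return re.sub(r"\|(\s*)symbol(\s*)=", r"|\1symbol_type_article\2=", wikitext)
```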
We seem to have many instances of {{ Cite web}} with |publisher= set to [http://www.thepeerage.com ThePeerage.com] or [http://www.thepeerage.com/info.htm ThePeerage.com]; for example on Henry de Beaumont. This needs to be changed to |website=thepeerage.com. Can anyone oblige, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:36, 26 November 2017 (UTC)
*{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |publisher=[http://www.thepeerage.com ThePeerage.com]}}
into *{{cite web|last=Lundy |first=Darryl |date=31 January 2011 |url=http://www.thepeerage.com/p10288.htm#i102873 |title=Henry Beaumont, 1st Earl of Buchan |website=thepeerage.com}}? -- Gabrielchihonglee ( talk) 13:10, 15 January 2018 (UTC)
http://www.thepeerage.com/info.htm. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:16, 16 January 2018 (UTC)
Hi bot developers, I'd like to request a bot to go through all articles using {{ Infobox anatomy}} and subtemplates:
There has been consensus to remove the "Dorlands" parameter - see Template talk:Infobox anatomy and here. "Dorlands" links to a proprietary dictionary that is no longer maintained; as our articles (even stubs) are of equal or greater length, and the dictionary is proprietary and unmaintained, editors have agreed that it should be deleted.
I'd like to ask if a bot could look through all articles using these infoboxes and remove empty and filled Dorlands, DorlandsPre, DorlandsSuf and DorlandsID parameters.
We are discussing whether any other parameters should be removed or moved to Wikidata, and it's probable I'll be here again in a few weeks to request an update to articles based on that discussion... but for the moment I'm just hoping we can remove all the Dorlands links. I realise I will also have to update the infobox documentation and the subtemplates, and I will get to that soon. -- Tom (LT) ( talk) 01:10, 10 February 2018 (UTC)
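The removal itself is mechanical; a minimal sketch of the parameter stripping (assumes the parameter values contain no nested templates or pipes, which holds for these simple ID fields):

```python
import re

# Dorlands, DorlandsPre, DorlandsSuf and DorlandsID, empty or filled.
DORLANDS = re.compile(r"\|\s*Dorlands(?:Pre|Suf|ID)?\s*=[^|}]*")

def strip_dorlands(wikitext):
    """Drop every Dorlands-family parameter from the infobox wikitext."""
    return DORLANDS.sub("", wikitext)
```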
Thanks Nihlus. We have also moved "FMA", "MeshName", "MeshNumber", "GrayPage" and "GraySubject" to Wikidata, so if your bot could remove those too, if possible, that would be appreciated. -- Tom (LT) ( talk) 01:16, 11 February 2018 (UTC)
Should {{ Infobox medical condition}} be included in this run as well? Nihlus 20:04, 12 February 2018 (UTC)
{{ Infobox medical condition}} is not included. Data export to Wikidata was done only for the anatomy-related infoboxes listed at the top of this section, so for other templates, keep their data; other templates would need a different discussion and data export. -- Was a bee ( talk) 22:10, 12 February 2018 (UTC)

Thanks Nihlus... I will run through and update the template series, and advise of anything the bot has missed within a week or so. -- Tom (LT) ( talk) 10:02, 13 February 2018 (UTC)
I've had a small discussion with another user saying that when using the measurements for metres and order flipping, e.g. convert|x|m|ftin|order=flip, the ftin is identified as unnecessary. A bot would help to run the task of removing the unnecessary 'ftin|' part of the source, which leaves us with 'convert|x|m|order=flip', where x is the height in metres. Iggy ( Swan) 15:39, 3 February 2018 (UTC)
I've had a small discussion with another user saying that when using the measurements for metres and order flipping, e.g. convert|x|m|ftin|order=flip, the ftin is identified as unnecessary for infobox height measurement. A bot will help to run the task of removing the unnecessary 'ftin|' part of the source in the infoboxes which leaves us with 'convert|x|m|order=flip', where x is the number, in metres, of the height of the person in the infobox. Iggy ( Swan) 16:36, 3 February 2018 (UTC)
|ftin= isn't used. I might be missing something, but I will pass on this one. Primefac ( talk) 16:52, 3 February 2018 (UTC)
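For the record, if this change were pursued, the edit itself would be a simple substitution; a naive sketch (assumes the plain {{convert|value|m|ftin|order=flip}} shape with no extra options between the units):

```python
import re

def drop_redundant_ftin(wikitext):
    """Strip the ftin output unit from flipped metre conversions:
    {{convert|1.8|m|ftin|order=flip}} -> {{convert|1.8|m|order=flip}}"""
    return re.sub(r"(\{\{\s*convert\s*\|[^{}|]*\|m)\|ftin\|(order=flip)",
                  r"\1|\2", wikitext)
```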
I am trying to sort out the unholy mess that is Adopt-a-User with a view to bringing it back to life in a somewhat modified guise. Most tasks are now done, but I am left with a problem that there are two types of template on the user pages of inactive editors which will regularly need removing. With redundant templates, no-one can see who is genuinely seeking support, nor, indeed, who is genuinely able to offer support to them.
Mine is a related call for assistance to this recent one, but is actually a lot more urgent and difficult to deal with manually.
* Editors place the {{ adoptme}} template on their page whenever they want to seek assistance under this scheme. I found 109 editors that showed up in Category:Wikipedians seeking to be adopted in Adopt-a-user. I've since manually stripped out all inactive newcomers, leaving just 18 active ones. But this task will need to be repeated regularly so as to remove the template from editors who have not been active for 4 weeks or more.
* Adopters place {{ adopting}} on their userpages. I sampled 52 entries, and conclude only 7% are active, productive editors today. I need to strip out these templates every month from all editors who have been inactive for, say, 4 to 6 weeks, plus anyone at all who has a total edit count of less than 500, as they don't meet the criteria for experience.

In both cases I would also wish to leave messages on the Talk pages of those two sets of editors to explain the template's removal, and what their options are if the editor resumes activity. I hope this all makes sense, and will be very grateful for any support that can be given. (Note: I will be unavailable to respond to follow-up questions between 3rd and 6th February.) Many thanks. Nick Moyes ( talk) 20:24, 1 February 2018 (UTC)