This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | ← | Archive 57 | Archive 58 | Archive 59 | Archive 60 | Archive 61 | → | Archive 65 |
Would anyone be able to replace Template:Volcanism of Canada Workgroup? It has been depreciated and merged into Template:WikiProject Volcanoes, and is no longer needed. Thanks! Kevin Rutherford ( talk) 22:47, 16 February 2014 (UTC)
|Canada-importance=
parameter. Presuming the importance of the workgroup may not necessarily be the importance of the WikiProject, are these two edits correct?
[1]
[2] Thanks!
GoingBatty (
talk) 04:02, 19 February 2014 (UTC)|Canada-importance=
parameter. Could you please explain what logic the bot should use to populate |importance=
and |Canada-importance=
? I don't know how what you want when I look at
this edit and
this edit? A bot can't "make a reasonable guess". Thanks!
GoingBatty (
talk) 02:55, 1 March 2014 (UTC)
Can someone please set use a bot to replace {{ Messagebox glaciers}} with {{ WikiProject Glaciers}}. After that is done, could that bot to run through Category:Glaciers and place the template on the articles that don't already have it? Thank you. -- evrik ( talk) 03:30, 18 February 2014 (UTC)
My bot, Yobot, can do this task. -- Magioladitis ( talk) 06:48, 4 March 2014 (UTC)
At about 700 newpages will be added to the project including some pages in "category talk" namespace. -- Magioladitis ( talk) 08:00, 4 March 2014 (UTC)
Done -- Magioladitis ( talk) 08:41, 4 March 2014 (UTC)
Please could someone's bot traverse Wikipedia:List of infoboxes and compile a list of those infoboxes listed there, which are not based on either {{ Infobox}} or {{ Infobox3cols}}? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:19, 2 March 2014 (UTC)
Could somebody please remove link tracking from Daily Mirror citations (and external links) (that's a UK newspaper) as in these edits? If the same can be done for other sites, so much the better. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:22, 13 February 2014 (UTC)
(http:\/\/www\.(?:dailymail|examiner|mirror)\.co\.uk\/[A-Za-z0-9\/\.-]+)#[A-Za-z0-9\.]{12,13}(\s|\|)
$1$2
Can someone else assist, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:44, 4 March 2014 (UTC)
Per
Template talk:Cite DNB#CS1 errors when volume not included, could someone create a bot which would look at all the instances of {{
Cite DNB}} and {{
DNB}} (and their redirects) where |wstitle=
is populated and |volume=
if it doesn't already exist? For example,
Ralph Cudworth contains {{DNB Cite|wstitle=Cudworth, Ralph}}
which displays:
The first link takes you to
https://en.wikisource.org/wiki/Cudworth,_Ralph_(DNB00) which contains a DNB00 template with |volume=13
. The request is to change the Wikipedia article to {{DNB Cite|wstitle=Cudworth, Ralph|volume=13}}
which displays a more specific reference:
Similarly, could someone also add |volume=
to {{
Cite DCB}} (and its redirects) if it doesn't already exist? For example,
Mackenzie Bowell contains {{Canadabio|ID=7231}}
which displays:
The first link takes you to
http://www.biographi.ca/en/bio.php?id_nbr=7231 which contains var m_volume_name = 'Volume XIV (1911-1920)';
. The request is to change the Wikipedia article to {{Canadabio|ID=7231|volume=XIV}}
which displays a more specific reference:
Thanks! GoingBatty ( talk) 04:00, 20 February 2014 (UTC)
Run WP:NULLEDIT against all articles in Category:WikiProject Australian Roads articles Downsize43 ( talk) 05:55, 28 February 2014 (UTC)
I wanted to know if there was a way to get a bot to remove wikilinks in pages where the wikilink goes back to the original page. Jinkinson talk to me 05:07, 6 March 2014 (UTC)
Add this templates, each one in its articles.
-- Vivaelcelta { talk · contributions} 08:50, 6 March 2014 (UTC)
Would it be possible to create a bot that would automatically arrange dates in chronological order (ranges would probably be from the earliest one) on pages? Supernerd11 :D Firemind ^_^ Pokedex 02:00, 9 March 2014 (UTC)
It's the Falkland Islands or the Falklands, not the Falklands Islands. Would it be possible for a BOT to be set up to fix this very common typo and change all instances of "Falklands Islands" to "Falkland Islands"? Many thanks. -- Philip Stevens ( talk) 14:00, 9 March 2014 (UTC)
Might there be any way to add a flag to a specific reference that is in the standard <ref>{{cite news |title=... |url=... etc.}}</ref> footnote format that would let some cruising bot know that the news article would likely be going behind a non-free/subscription wall in a few days/weeks, and thus would be really useful for Wikipedia use if some bot got an archive URL for the news article? Sometimes the news article I come across in leaving a citation is one that I know the newspaper won't leave the source freely available for more than a few days or weeks. Aviation Week, the Financial Times and The Economist all seem to do this, as do many others.
I asked this question on the Teahouse, or rather, a question as to whether there might already exist such a method, and was referred over here, after being told that no existing bot can handle that. Here is the remainder of that conversation, which clarifies things a bit. N2e ( talk) 18:34, 6 March 2014 (UTC)
The end goal would be to have a bot that searched articles for newly-added citations, and if a citation had the appropriate flag indicating that an editor had requested an archive URL be added, the bot would obtain the archive URL, and add the "|archiveurl=http etc.
" to the particular citation by replacing the "|requestArchiveUrl=y
" flag. That's my request. Let me know if you have any questions. And I thank you for even considering it.
N2e (
talk) 18:34, 6 March 2014 (UTC)
Opinion: Any bot that auto-magically converts regular refs to archive refs should be speedy declined. The last bot we had to do automated archive url creation needed an act of Jimbo to shut down and we still have that bot's vector and site under the global spam list. I see no problem with the bot adding a template in the reference tag that adds a hidden maintenance category for editors who want to premtively work frequently deadlinked sites is not a problem, but should not touch the cite template itself. Hasteur ( talk) 02:58, 7 March 2014 (UTC)
I think we are having two discussions here, the second of which seems to me may be derailing the conversation that I initially proposed to have. But maybe I'm wrong. Please consider:
So, if it is possible, I would really appreciate it if we might separate the two conversations.
Perhaps someone could move the stuff not related to THIS SPECIFIC BOT REQUEST to somewhere else where it might be productively dealt with. (IP editors avoiding consensus, etc.)
Then, perhaps we could discuss my specific bot proposal. Does a single-purpose bot, for a bot that would do only one thing (look for a specific editor-added flag) and then create an archive url for urls that would otherwise soon go behind a subscription paywall, create a bunch of issues that would make it a problematic bot? Thanks for reading this far. Cheers. N2e ( talk) 23:46, 9 March 2014 (UTC)
The merging process basically goes like this:
I would like to have it be such that adding an {{ R from merge}} tag automatically adds the merged-to and merged-from tags. The reasons being that this tag is only placed after a completed merger, and that the to and from info is readily available. I don't know if this can be accomplished with a substitution, but a bot seemed the most likely solution. -- Nick Penguin( contribs) 03:28, 12 March 2014 (UTC)
There are too many references on Wikipedia that are simply bare URLs. Could you make a bot that notifies a user on his or her talk page if they have simply added a URL as a reference; and remind them to cite the source properly?
If such a bot already exists, is there a way to make it better? Right now this seems to be a bit of a problem. Although, I have to admit, not quite as bad as it could be.
-- Kndimov ( talk) 22:31, 13 March 2014 (UTC)
OK, we recently had a RFC on the way things worked at Did you know, here, and the thing which gained consensus was a bot to notify people when others had nominated their article for DYK, see here. There have been a few issues recently over article creators not liking hooks, or not wanting their articles nominated, which could be helped if they were aware that there was a discussion about the nomination which they could contribute to. Thanks, Mat ty. 007 11:22, 23 February 2014 (UTC)
Hi,
Am involved in Tamil Wiki Projects. I would like to improve the quality of the Tamil Wikipedia articles by adding references/citations/notes for all the articles. If there is a bot for performing the operation it would be easier.
What functionality I need?
How it can be done?
How it will help?
The same tool can be used in other wiki projects also. Hope to get a bot soon for high-quality articles in Tamil. Thanks. -- Dineshkumar Ponnusamy ( talk) 09:57, 18 March 2014 (UTC)
FYI, I've started a discussion at VPT about how we deal with unresolved requests. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:10, 12 March 2014 (UTC)
Every page on a human name written in conventional Western style (e.g. Jacob Henderson) should have a redirect pointing to that name from its sortname title (e.g. Henderson, Jacob). This makes it easier to find names when searching by last name, which is the common practice of print encyclopedias, and which many users may expect. This also applies to disambiguation pages (e.g. John Anderson), which are tagged with an {{ hndis}} template, and should have an incoming redirect (e.g. Anderson, John). In each case, the redirect itself should be tagged {{ R from sort name}} to indicate that it is a printworthy redirect. There are some special cases, particularly redirects with accents, diacritics, or other special characters. Where a name has a such a character, (e.g. Torbjørn Agdestein), there should be a redirect from the actual last-name-first title (e.g. Agdestein, Torbjørn) and from the sortname title (e.g. Agdestein, Torbjorn), with only the sortname title having the {{ R from sort name}} tag. I estimate that there are about a half million last-name-first redirects needing to made. Note that pages with parenthetical disambiguators (e.g. Tom Jones (singer)) do not need such a redirect. Cheers! bd2412 T 17:23, 19 March 2014 (UTC)
“ | The basic rule "Wikipedia is not paper" also applies to the titles of the articles. There is no reason to use the reverse naming in most cases, just because paper encyclopedias do that | ” |
There is disagreement about whether this TLA should redirect to Local administrative unit (the situation until a couple of weeks ago) or Lebanese American University (changed then and since reversed). The disagreement can be largely made moot by fixing the 400 or so links to LAU in articles about settlements in Portugal. Each of these contains a {{ Geobox}} Settlement template, with a link to LAU. If a bot could pipe these to "Local administrative unit", very few links to LAU would remain. Colonies Chris ( talk) 16:01, 20 March 2014 (UTC)
Hello all, I am proposing an idea based on my current activities on Wikipedia. There should be a way that the massive amount of Raster images on the project that needs converting can be changed to Vector (SVG) files, a more efficient file type that can be edited at ease. — Preceding unsigned comment added by Danielh32 ( talk • contribs) 23:41, 20 March 2014 (UTC)
There are many hundreds of articles in the subcategories of Category:Parishes of Portugal whose ({{ Geobox}}es have problems with incorrect or unnecessarily indirect or unconventionally spelled links. The first in the list below is a problem that needs to be fixed; the others are all minor improvements that could be done at the same time.
Colonies Chris ( talk) 15:38, 22 March 2014 (UTC)
For example, would it be appropriate for a bot to change the sort key of
Category:Manufacturing companies established in 1897 in Category:Manufacturing companies by year of establishment to "1897". If so, I can provide a partial list of such categories (starting with x (companies) by year of estabilshment). Further research would be required for automated sorting for categories which have both years and decades, but I asking whether this would be appropriate. Frequency would probably be "on request" (or weekly) for new additions, and monthly for maintenance, once individual patterns are established.
More complicated patterns might include "by country" lists.
My question here is more hypothetical: Would it be appropriate for a bot or AWB script to do this. — Arthur Rubin (talk) 17:12, 23 March 2014 (UTC)
I suggest a bot to work through the 6,300-odd pages in Category:Pages with missing references list to insert a References section heading (if not present) and the {{reflist}} template, ignoring the pages mentioned on the category page, the operation to be repeated at stated intervals. It looks from the description as though User:JamietwBot would have done this but is shown on its page as inactive : Noyster (talk), 13:40, 25 March 2014 (UTC)
<ref>
with no balancing </ref>
, that will also throw the error "Cite error: There are <ref>
tags on this page, but the references will not show without a {{reflist}}
template (see the
help page).", even if there is a {{
reflist}}
or <references />
later on. That </ref>
might be missing by accident; or it might be present as a typo e.g. </reg>
; or the page may have been vandalised. Each case needs to be considered in relation to its recent editing history. --
Redrose64 (
talk) 16:14, 25 March 2014 (UTC)
Is there anyone who can make a bot that will bet on different colors in roulette? For example when winning it changes to betting on black and if loses it just doubles the bet on the same color. xoxo — Preceding unsigned comment added by 217.210.30.233 ( talk) 16:34, 25 March 2014 (UTC)
The website Television Without Pity is going to be shutting down next week (they aren't clear on the exact steps, how long the content will remain), and about 500 mainspace articles use it for references for primarily television episode reviews. (My list from AWB is: User:Masem/TWOP pages). While most of the recaps are archived at Archive.org, the way TWOP paginates it stories means only the first page of these reviews/recaps are archived, and I don't immediately see any way to alter the URL to get then entire recap in one shot. Of course, the ideal route would be to have a bot run though the list of articles using televisionwithoutpity.com links and archive those links, but because of this pagination, it would require 1) figuring out how many pages there is (fortunately, the url for a specific page is simply to get) and 2) submitting each of those pages to an archive. I've altered the TV project to this issue as they might provide more suggestions here. -- MASEM ( t) 21:53, 27 March 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
* non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 12:33, 12 March 2014 (UTC) * non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 11:51, 17 March 2014 (UTC) * non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 15:39, 25 March 2014 (UTC)
What exactly do you want the bot to do? There were some moving requests in the past, but not much happened due to the opposition of some in the en.wp community, so you might want to search for support first.-- Strainu ( talk) 15:22, 30 March 2014 (UTC)
i need to have a bot for filling up my all entries in my project honeypot,its my college project and so i need that to show the demo to my professors — Preceding unsigned comment added by 27.251.70.204 ( talk) 06:17, 28 March 2014 (UTC)
Please note that when Matty.007 listed the request below on Feb 23, it was responded to by Ceradon on March 2. This editor's contributions page show he did no other edits since that time except to archive his talk pages on March 6, and previously had not edited since July 2013. Can someone else please program the bot we need described below? — Maile ( talk) 13:55, 28 March 2014 (UTC)
OK, we recently had a RFC on the way things worked at Did you know, here, and the thing which gained consensus was a bot to notify people when others had nominated their article for DYK, see here. There have been a few issues recently over article creators not liking hooks, or not wanting their articles nominated, which could be helped if they were aware that there was a discussion about the nomination which they could contribute to. Thanks, Mat ty. 007 11:22, 23 February 2014 (UTC)
{{DYKmake|article title|user name}}
, with an optional subpage parameter at the end. There should be a way to opt out, so, for example, people who collaborate on articles won't be irritated with unwanted notifications. It would be great if the bot could check to see if the nominator has already notified other user(s), but maybe that's asking for too much.
MANdARAX •
XAЯAbИAM 23:37, 31 March 2014 (UTC)afi (afi.com) have reorganised their website. Links in the form connect.afi.com no longer work. According to https://en.wikipedia.org/?title=Special:LinkSearch&limit=500&offset=0&target=http%3A%2F%2Fconnect.afi.com there are 210 results. Some are personal pages or talk pages, which don't need to be changed, but all links on main wiki pages should be re-pointed:
http://connect.afi.com/site/DocServer/100Movies.pdf?docID=281 ->
http://www.afi.com/Docs/100Years/100Movies.pdf
http://connect.afi.com/site/DocServer/10top10.pdf?docID=361 ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/Movies_ballot_06.pdf?docID=141 ->
http://www.afi.com/Docs/100Years/Movies_ballot_06.pdf
http://connect.afi.com/site/DocServer/TOP10.pdf ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/TOP10.pdf?docID=441 ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/cheers100.pdf ->
http://www.afi.com/Docs/100Years/cheers100.pdf
http://connect.afi.com/site/DocServer/cheers100.pdf?docID=202 ->
http://www.afi.com/Docs/100Years/cheers100.pdf
http://connect.afi.com/site/DocServer/cheers300.pdf?docID=201 ->
http://www.afi.com/Docs/100Years/cheers300.pdf
http://connect.afi.com/site/DocServer/handv100.pdf?docID=246 ->
http://www.afi.com/Docs/100Years/handv100.pdf
http://connect.afi.com/site/DocServer/handv400.pdf?docID=245 ->
http://www.afi.com/Docs/100Years/handv400.pdf
http://connect.afi.com/site/DocServer/laughs100.pdf?docID=252 ->
http://www.afi.com/Docs/100Years/laughs100.pdf
http://connect.afi.com/site/DocServer/laughs500.pdf?docID=251 ->
http://www.afi.com/Docs/100Years/laughs500.pdf
http://connect.afi.com/site/DocServer/movies100.pdf?docID=264 ->
http://www.afi.com/Docs/100Years/movies100.pdf
http://connect.afi.com/site/DocServer/movies400.pdf?docID=263 ->
http://www.afi.com/Docs/100Years/movies400.pdf
http://connect.afi.com/site/DocServer/passions100.pdf?docID=248l ->
http://www.afi.com/Docs/100Years/passions100.pdf
http://connect.afi.com/site/DocServer/quotes100.pdf?docID=242 ->
http://www.afi.com/Docs/100Years/quotes100.pdf
http://connect.afi.com/site/DocServer/quotes400.pdf?docID=205 ->
http://www.afi.com/Docs/100Years/quotes400.pdf
http://connect.afi.com/site/DocServer/scores250.pdf?docID=221 ->
http://www.afi.com/Docs/100Years/scores250.pdf
http://connect.afi.com/site/DocServer/scores250.pdf?docID=22 ->
http://www.afi.com/Docs/100Years/scores250.pdf
http://connect.afi.com/site/DocServer/songs100.pdf?docID+244 ->
http://www.afi.com/Docs/100Years/songs100.pdf
http://connect.afi.com/site/DocServer/songs100.pdf?docID=244 ->
http://www.afi.com/Docs/100Years/songs100.pdf
http://connect.afi.com/site/DocServer/songs400.pdf?docID=243 ->
http://www.afi.com/Docs/100Years/songs400.pdf
http://connect.afi.com/site/DocServer/stars50.pdf?docID=262 ->
http://www.afi.com/Docs/100Years/stars50.pdf
http://connect.afi.com/site/DocServer/stars500.pdf?docID=261 ->
http://www.afi.com/Docs/100Years/stars500.pdf
http://connect.afi.com/site/DocServer/thrills100.pdf?docID=250 ->
http://www.afi.com/Docs/100Years/thrills100.pdf
http://connect.afi.com/site/DocServer/thrills400.pdf?docID=249 ->
http://www.afi.com/Docs/100Years/thrills400.pdf
http://connect.afi.com/site/PageServer?pagename=100YearsList -> is the wrong link anyway, needs to be manually remapped to the right target
http://connect.afi.com/site/PageServer?pagename=micro_100landing -> is the wrong link anyway, needs to be manually remapped to the right target
http://connect.afi.com/site/DocServer/10top10.pdf?docID=381&AddInterest=1781 could be changed to http://www.afi.com/Docs/100Years/TOP10.pdf, but the former lists all the nominations (I think) whereas the latter only lists the winners.
Manolan1 ( talk) 21:17, 31 March 2014 (UTC)
Hi! I've been asked to write a new task for my bot at User talk:PotatoBot#PotatoBot for Glottolog codes?. Before I start coding, I'd like to ask if there is any bot around that can already do this (i.e. adding parameters to language infoboxes based on a list in wiki format) without [much] additional coding work. Thanks, ἀνυπόδητος ( talk) 11:25, 4 April 2014 (UTC)
They changed their online Gold Book, breaking all our direct links. Easy regexp/replacement mapping to update them...see for example this manually done edit. Better a bot than 161 articles manually. DMacks ( talk) 01:58, 3 April 2014 (UTC)
The Judicial Committee of the Privy Council and the Supreme Court of the United Kingdom have changed their website ( ref) and old links are now broken. Requested changes:
-- Txuspe ( talk) 09:00, 5 April 2014 (UTC)
There are a lot of dead links on MTA (New York City)-related pages because MTA recently moved all its pages to new URLs as of March 6, 2014. The links have been dead since that date.
For example, the URL http://www.mta.info/nyct/facts/ridership/ridership_sub_annual.htm was moved to http://web.mta.info/nyct/facts/ridership/ridership_sub_annual.htm.
The URLs all have to be changed from the format http://www.mta.info/... to http://web.mta.info/...
At least 500 pages make use of the old http://www.mta.info/... URLs. I have already notified three WikiProjects ( WP:TRAINS, WP:NYC, WP:NYCPT) about this. -- Epicgenius ( talk) 19:49, 7 April 2014 (UTC)
I updated the links on your main page Metropolitan Transportation Authority (New York) – review the diff:
—This doesn't seem that easy to automate. A bot will need to verify that (1) the current link is a "404 Page Not Found" and (2) the new link it writes actually works. – Wbm1058 ( talk) 11:51, 10 April 2014 (UTC)
Over at Wikipedia:Dispute resolution noticeboard we have accrued an ad-hock combination of scripts and templates, assisted by EarwigBot and MiszaBot. In particular, we seem to be asking The Earwig for a lot. He has been very responsive and has been great about our constant stream of requests, but rather than dumping more and more on him I am wondering whether someone who is really good at automation has the time and inclination to do a proper job of re-engineering all of our DRN automation tools from top to bottom. If we manage to get a smooth-running system working, other noticeboards might be interested in using the same system. Is anyone interested in working on this?
(Previous request: Wikipedia:Bot requests/Archive 56#Dispute resolution noticeboard, User talk:Hasteur/Archive 8#DRNBot) -- Guy Macon ( talk) 17:33, 19 March 2014 (UTC)
So, is anyone here willing to volunteer to do a proper job of re-engineering all of our DRN automation tools from top to bottom? -- Guy Macon ( talk) 16:51, 27 March 2014 (UTC)
Ok, let me try to summarize:
-- — Keithbob • Talk • 16:56, 28 March 2014 (UTC)
Add this templates, each one in its articles.
-- Vivaelcelta { talk · contributions} 07:30, 17 March 2014 (UTC)
|isbn=
in CS1 citationsAt some point, a helpful bot or human editor inserted the {{
Please check ISBN}} template within the |isbn=
parameter in citations in a thousand or so (somewhere between hundreds and two thousand) articles. This addition may have been a helpful maintenance tag at one time, but now it interferes with displaying and fixing ISBNs in citations, as documented on the template's documentation page.
Can someone please use a bot or AWB or other means to run through
Category:Pages with ISBN errors and remove all instances of {{
Please check ISBN}} from the |isbn=
parameter in citations? This will make it easier for human editors to clean up the articles in the category. There are about 6,600 articles in the category.
As a side note, it is likely that many of the articles edited by the bot will remain in
Category:Pages with ISBN errors, since CS1 citations contain code that checks |isbn=
for valid values.
A sample article that shows what this template does to citations is Theleis I Den Theleis. Thanks in advance. – Jonesey95 ( talk) 04:53, 28 March 2014 (UTC)
{{
Please check ISBN}}
to be outside the {{
cite book}}
but still be inside the <ref>...</ref>
,
like this. --
Redrose64 (
talk) 08:40, 28 March 2014 (UTC)
Here's a regex (tested successfully via AutoEd) that you can use, if you are a willing AWB user or bot operator:
(\|\s*isbn\s*=\s*[\d-X]+\s*)\{\{Please check ISBN\|reason\=[a-z\d\s\.\(\)]+\}\}
Do the find in case-insensitive mode, and replace with $1
–
Jonesey95 (
talk) 04:01, 3 April 2014 (UTC)
|isbn=
in citations. –
Jonesey95 (
talk) 23:40, 9 April 2014 (UTC)@
Jonesey95,
Redrose64, and
Rich Fambrough: I ask for the Consensus discussion as a CYA of being a bot operator. So that I understand, we are to look for the {{
Please check ISBN}}
template inside a {{
cite book}}
block and move it outside the cite book, but still have it be inside the reference section. If this is correct, I'll start tinkering with my AWB rules to work on this.
Hasteur (
talk) 00:36, 10 April 2014 (UTC)
@ Jonesey95: Ok, the regex you gave me would handle the removal of the template, however the one I used to relocate the Please check ISBN and fix the closing didn't have enough potential matches in it's class match for after the ISBN template. [14] is a edit that satisfies the relocate better. Thoughts? Hasteur ( talk) 23:21, 14 April 2014 (UTC)
Cydebot (
talk ·
contribs) keeps replacing my template {{
DVD}} with a licensing template, but it is claiming
Wikipedia:Non-free content/templates which has been marked historical since 2012, and has been unused since 2008. There's no reason for this to still be actively replacing templates, since everything will have been fixed years ago. Further, {{
DVD}}
isn't even listed as a template needing replacement on that page. --
70.24.250.235 (
talk) 04:33, 31 March 2014 (UTC)
{{
DVD}}
from the list that Cydebot works from; unfortunately, I don't know what that list might be. The other is to move {{
DVD}}
to a different name - two that spring to mind are {{
DVD navbox}}
and {{
DVD topics}}
, and then adjust the articles to use the new name. These two names have never been used before, so it's unlikely (but possible) that these names are also on Cydebot's replace list. --
Redrose64 (
talk) 10:40, 10 April 2014 (UTC)Hey, it's fixed now. The long version of this is that in 2006 a lot of templates were migrated from one set of names to another, and to enforce that, I had a periodic bot task running that renamed any new uses of those old templates. It's been long enough now that I'm pretty sure no one even remembers the old templates, let alone uses them, and there's been some conflicts now with unrelated templates coming up using the old ones' names, so I've simply canceled the task. You shouldn't see this issue again. Sorry about that! -- Cyde Weys 05:13, 13 April 2014 (UTC)
{{
Non-free DVD cover}}
in error for {{
DVD}}
, and reverted the Cydebot edit. This has caused an anomaly at
The Hits: Chapter One#Track listing; see
Wikipedia talk:WikiProject Albums#Track listings for music DVDs. --
Redrose64 (
talk) 12:28, 13 April 2014 (UTC)From the very origin of the interwiki linking system, the prefix "wiki:" was available for making links to the WikiWikiWeb. However, in the intervening years there became less and less reason to link from here to that website, and people began to increasingly confuse it with the "Wikipedia:" namespace. So, given that the WikiWikiWeb also has the prefixes "c2:" and "WikiWikiWeb:", in January of this year it was switched off. (See the discussion at Meta.) This broke just short of 1100 links across all projects. About 700 of those are on this wiki, and I'd like to request a bot run to fix them. Thankfully, as page names on the WikiWikiWeb follow a predictable format (CamelCase), it was trivial to separate them out from the links that appear to have been intended to actually be "Wikipedia:" links. I've put both lists in this paste. The format is one page ID and link destination per line, tab-separated. Some pages have more than one link that needs to be fixed and thus appear multiple times. I also stripped out a lot of obvious mistaken article links from the latter list, plus any occurring on IP users' talk pages, because there's really not much point updating those. The final total of links that need fixing is 635.
If there's any other information I need to provide please let me know. Thank you. — Scott • talk 17:38, 8 April 2014 (UTC)
Some time ago I created an index to the London Gazette. The root page is Wikipedia:London Gazette Index.
It appears (per the following note) that the pages have been migrated, and not in a predictable way. I have requested a mapping file form Authority, and will share this, as needed, if I get it. If not there are other ways to fix up the index.
=== London Gazette index ===
Rich, I randomly tried a few links from the 1918 index and it looks like the Gazette's move to it's new url and pagination has screwed things up as I got a 100% error rate. For example the 1 January issue url you have in the index is http://www.london-gazette.co.uk/issues/30453/pages/1 this is now https://www.thegazette.co.uk/London/issue/30453/page/113 and the first supplement which was http://www.london-gazette.co.uk/issues/30454/supplements/1 is now https://www.thegazette.co.uk/London/issue/30454/supplement/225 All in a bit of a pain in the backside. Nthep ( talk) 08:08, 14 April 2014 (UTC)
- Yes, there is a nice document by Tim Berners-Lee explaining why people shouldn't break the Internet like this. We are used to it, though, on Wikipedia. Unfortunately I am prohibited from fixing anything by means other than "typing in the edit box". I will make an appeal in various places for someone to fix the index. All the best, Rich Farmbrough, 19:13, 14 April 2014 (UTC).
Any takers?
All the best,
Rich
Farmbrough, 19:53, 14 April 2014 (UTC).
Hello over at Wikiproject Video games, we need to clean up our articles and remove a hell of a lot of redundant data.
Since the project started, a number of template fields have been added and removed from the main article template. We now have 15 defunct fields that still appear in the code for some article pages, and we're now in the position where all of this old data is getting in the way and making things confusing for new users (They copy over template code from existing articles only to find that some template fields aren't working after they have populated them with data.)
Initially we were just going to delete the data, but Wikidata say they want it, so that negates any semi-automatic editing as its outside the capability of AWB; and the volume of edits makes any manual effort a non-starter.
In order to aid all users (especially new ones) in editing the infobox code, and at the same time preserve data, we want to move the populated defunct fields from the template on existing article pages, to a new hidden template at the bottom of the article, so that the data in those fields can be harvested later by the squirrels at WikiData. Any field that is blank can just be binned.
As we have over 11,000 articles that need this process carried out on them, a bot really is the only way of doing this.
We have a tracking category that lists every article that needs editing.
The discussions around this job are the following:
Wikipedia_talk:WikiProject_Video_games/Archive_104#We.27re_editing_42.25_of_all_WP:VG_articles
Template_talk:Infobox_video_game#Tracking_category
Category:Infobox video game with deprecated parameters
I've also put the details in a table to make things easier to read.
Job Description
| ||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|
Its not going to be easy as some of the fields contain user's own unique - and sometimes differing - formatting styles, but we know you'll find a way to cope with it. Hope you can help. - X201 ( talk) 08:12, 8 April 2014 (UTC)
{{
Video game data}}
to align with {{
Persondata}}
(without being agglutinative). All the best,
Rich
Farmbrough, 19:23, 9 April 2014 (UTC).Thanks for the AWB script, much more thorough than the one I made. As for WikiData, there has been no reply to the request I placed on their bot request page. - X201 ( talk) 08:04, 14 April 2014 (UTC)
I want to create a bot that does the editing on wikipedia on Mondays, Tuesdays, Wednesdays, Thursdays, and Fridays. Anuvarshanw ( talk) 19:51, 13 April 2014 (UTC)
Request withdrawn
A bot to archive my talk-pages on the basis of signals from me. The talk-pages are the primary talk page of specified accounts and nominated holding pages. The bot should check the pages frequently, ideally by watch-listing them, or monitoring recent changes feed.
A thread will closed by me, either by a triple "-" following someone else's comment or a line starting with "--" following my own sig. (Attempted) closures by other editors, excluding Femto Bot (who knows what it is doing, and mediates all other processes that know what they are doing) should be ignored.
The entire thread from the 2nd level header to the line before the signal should be moved to the destination, preceded by one blank line. Headers should have spaces removed from between the "=="s and the title of the thread. Blank lines immediately after headers, and trailing blank lines should be removed. The "signal" line together with preceding and succeeding blank lines should be replaced with one blank line.
Edit summary should ideally identify which threads were moved where, but a summary "2 threads moved to XX, 3 threads moved to Y" is acceptable.
If no destination is specified the target page is the archive pertaining to the month of the last sig in the thread.
If the target page does not exist it should be created, with the appropriate headers, and the index/mega pages updated as required. The naming scheme for archive pages will be User talk:<account name>/Archive/YYYY Monthanme (slightly different from that used in the past). A redirect from User talk:<account name>/Archive/YYYYMM should also be created.
If an invalid destination code is left the bot should leave a note on my talk page. The bot may mark the signal line in order to simplify the task of recognising that it has already seen it.
Syntax: <closure signal>[Dest code]
The only current valid destination codes are "TODO" (case insensitive), "TALK" (case insensiteve) and BARN. TODO's target is User talk:Rich Farmbrough/To do (regardless of the account being archived) sub pages may be specified in future. TALK is the talk page of the account involved. BARN specifies that any enclosed award or gift (kittens, cookies, beer etc.) should be copied to the barnstar locations for the account involved, and the thread should be archived as normal.
All the best:
Rich
Farmbrough, 19:35, 23 April 2014 (UTC).
In order to get a grip of articles under the scope of the new WP:Physiology, I request that all articles under Category:Physiologists and Category:Physiology and all subcategories be tagged with:
{{WikiProject Physiology |class=|importance=|field=}}
Thanks in advance, -- LT910001 ( talk) 00:13, 25 April 2014 (UTC)
CAT:UAA is used to track violations of the username policy pending discussions with the users, etc. User talk pages are added to the category and removed when the relevant users are blocked, deemed to have acceptable usernames, or when they become inactive. KingpinBot removes all of the blocked accounts automatically (although it has been dormant for some time). What we need is a bot that can remove all of the inactive accounts. Standard practice at CAT:UAA is if a user has (1) been in the category for 7+ days, & (2) not edited/made log entries for 7+ days, then it is removed. Right now all of this is done manually, but it would be more efficient for a bot to do it. Is this possible? NTox · talk 04:56, 15 April 2014 (UTC)
All content. i.e. web pages, under http://hometown.aol.com/ disappeared on Oct 31,2008 (see AOL Hometown). All links to that content should be marked {{Dead link}}.
Existing bots should be modified to do this. Lentower ( talk) 21:45, 28 April 2014 (UTC)
Note: I've since found that much of this content is archived at the Internet Archive Wayback Machine at https://archive.org/ in archives before Nov 1, 2008. Archives after Oct 31, 2008 just link to root of that web site. Lentower ( talk) 13:20, 29 April 2014 (UTC)
We are starting the project Videos for Wikipedia articles. The Category:Articles containing video clips is extremely useful for understanding what kinds of videos are already used in what kinds of articles. Alas, it seems the list is not complete. Therefore we would like to request a bot to search for articles with video clips and mark them for that category. Thank you. -- Vgrass ( talk) 16:56, 22 April 2014 (UTC)
Per the discussion at Wikipedia:Village pump (proposals)/Archive 110#Bot blank and template really, really, really old IP talk pages., there is consensus to have a bot replace all content on IP talk pages with an {{ OW}} tag if:
Some editors would allow even shorter time frames, but those in the proposal are what have been unanimously approved at this point. Now all we need is a bot to take this up. Cheers! bd2412 T 17:54, 22 April 2014 (UTC)
It has been pointed out that URLs ending in period cause issues when copied to clients such as email (version of 15:25, 21 April 2014). The easy solution as proposed by SmokeyJoe is to "create a redirect without the period for every article URL ending in a period". I support this, and propose assigning the task to a bot, to first make initial redirects for existing articles ending in a period such as those on the referenced list, and then to continue making such redirects for new articles, on an ongoing basis. Cheers! bd2412 T 20:04, 24 April 2014 (UTC)
Some stats for this problem:
Tentatively, this makes the scale of the work:
- TB ( talk) 15:52, 1 May 2014 (UTC)
This request from April 10, 2014 is still unresolved. Perhaps a bot could quickly be written to fix these links; it's been two weeks since the last reply has been made to this thread, and yet no solution is in place yet. Epicgenius ( talk) 03:11, 29 April 2014 (UTC)
Bot that reads newspapers and creates an ontologyWe can use published sources to create an ontology. — Preceding unsigned comment added by Geetha nitc ( talk • contribs) 12:10, 30 April 2014 (UTC) -- Geetha nitc ( talk) 12:19, 30 April 2014 (UTC) Bot to do temporal reasoning using ontologiesInformation changes with time. There is no bot which automatically infers new information and prints the source. A human has to do this process. This bot should reason the changes with time. — Preceding unsigned comment added by Geetha nitc ( talk • contribs) 12:16, 30 April 2014 (UTC) -- Geetha nitc ( talk) 12:19, 30 April 2014 (UTC) Bot that adds information using new constructsThis bot should convert an existing ontology to text. Precisely controlled language should be used. Controlled language creates short unambiguous sentences. -- Geetha nitc ( talk) 12:22, 30 April 2014 (UTC) |
I think the report Wikipedia:WikiProject Stub sorting/missing stubs could use to be updated on a regular basis (probably monthly). Could someone create a bot to do it? There are instructions at the bottom of the page on how the report is generated. עוד מישהו Od Mishehu 04:27, 2 May 2014 (UTC)
Discretionary sanctions ( WP:AC/DS) are a procedure established by the Arbitration Committee that allow administrators to take measures such as blocks or topic bans to prevent disruption in some particularly contentious topic areas, such as the Arab-Israeli conflict or other ethno-nationalistic or ideological disputes. The Committee just enacted a new set of procedures that provide that editors must be alerted about the existence of this procedure, and that such alerts expire after a year.
Now I wonder whether it would not be better to automate this alerting procedure with a bot, in order to save editors the hassle of alerting one another manually, and to prevent these alerts from appearing to be confrontational or accusatory in nature.
The bot I envisage would add the template {{ Ds/alert}} to the talk page of any editor who edits an article page within the scope of discretionary sanctions, as determined by categories associated with the covered topics on an administrator-editable configuration page. The bot would first check whether the editor has received such an alert in the last 12 months (there is a filter for this, see {{ Sanction enforcement request header}}). The template message would be preceded by a statement such as:
I have linked to this idea from the arbitration noticeboard talk page, to allow arbitrators to indicate whether they think that such a bot would be a good idea. Sandstein 09:27, 4 May 2014 (UTC)
CAT:UAA is used to track violations of the username policy pending discussions with the users, etc. User talk pages are added to the category and removed when the relevant users are blocked, deemed to have acceptable usernames, or when they become inactive. KingpinBot removes all of the blocked accounts automatically (although it has been dormant for some time). What we need is a bot that can remove all of the inactive accounts. Standard practice at CAT:UAA is if a user has (1) been in the category for 7+ days, & (2) not edited/made log entries for 7+ days, then it is removed. Right now all of this is done manually, but it would be more efficient for a bot to do it. Is this possible? NTox · talk 04:56, 15 April 2014 (UTC)
We need a bot to upload/protect local copies of images that are going to be used on the main page, since the commons bot protection system is unreliable. Wikipedia:Bots/Requests for approval/TFA Protector Bot 2 has stalled. Is there anyone able to take up the task? Bencherlite Talk 08:24, 7 May 2014 (UTC)
This bot has been down for four months and more than 200 portals are no longer updated. -- SleaY( t) 22:35, 9 May 2014 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | ← | Archive 57 | Archive 58 | Archive 59 | Archive 60 | Archive 61 | → | Archive 65 |
Would anyone be able to replace Template:Volcanism of Canada Workgroup? It has been depreciated and merged into Template:WikiProject Volcanoes, and is no longer needed. Thanks! Kevin Rutherford ( talk) 22:47, 16 February 2014 (UTC)
|Canada-importance=
parameter. Presuming the importance of the workgroup may not necessarily be the importance of the WikiProject, are these two edits correct?
[1]
[2] Thanks!
GoingBatty (
talk) 04:02, 19 February 2014 (UTC)|Canada-importance=
parameter. Could you please explain what logic the bot should use to populate |importance=
and |Canada-importance=
? I don't know how what you want when I look at
this edit and
this edit? A bot can't "make a reasonable guess". Thanks!
GoingBatty (
talk) 02:55, 1 March 2014 (UTC)
Can someone please set use a bot to replace {{ Messagebox glaciers}} with {{ WikiProject Glaciers}}. After that is done, could that bot to run through Category:Glaciers and place the template on the articles that don't already have it? Thank you. -- evrik ( talk) 03:30, 18 February 2014 (UTC)
My bot, Yobot, can do this task. -- Magioladitis ( talk) 06:48, 4 March 2014 (UTC)
At about 700 newpages will be added to the project including some pages in "category talk" namespace. -- Magioladitis ( talk) 08:00, 4 March 2014 (UTC)
Done -- Magioladitis ( talk) 08:41, 4 March 2014 (UTC)
Please could someone's bot traverse Wikipedia:List of infoboxes and compile a list of those infoboxes listed there, which are not based on either {{ Infobox}} or {{ Infobox3cols}}? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:19, 2 March 2014 (UTC)
Could somebody please remove link tracking from Daily Mirror citations (and external links) (that's a UK newspaper) as in these edits? If the same can be done for other sites, so much the better. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:22, 13 February 2014 (UTC)
(http:\/\/www\.(?:dailymail|examiner|mirror)\.co\.uk\/[A-Za-z0-9\/\.-]+)#[A-Za-z0-9\.]{12,13}(\s|\|)
$1$2
Can someone else assist, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:44, 4 March 2014 (UTC)
Per
Template talk:Cite DNB#CS1 errors when volume not included, could someone create a bot which would look at all the instances of {{
Cite DNB}} and {{
DNB}} (and their redirects) where |wstitle=
is populated and |volume=
if it doesn't already exist? For example,
Ralph Cudworth contains {{DNB Cite|wstitle=Cudworth, Ralph}}
which displays:
The first link takes you to
https://en.wikisource.org/wiki/Cudworth,_Ralph_(DNB00) which contains a DNB00 template with |volume=13
. The request is to change the Wikipedia article to {{DNB Cite|wstitle=Cudworth, Ralph|volume=13}}
which displays a more specific reference:
Similarly, could someone also add |volume=
to {{
Cite DCB}} (and its redirects) if it doesn't already exist? For example,
Mackenzie Bowell contains {{Canadabio|ID=7231}}
which displays:
The first link takes you to
http://www.biographi.ca/en/bio.php?id_nbr=7231 which contains var m_volume_name = 'Volume XIV (1911-1920)';
. The request is to change the Wikipedia article to {{Canadabio|ID=7231|volume=XIV}}
which displays a more specific reference:
Thanks! GoingBatty ( talk) 04:00, 20 February 2014 (UTC)
Run WP:NULLEDIT against all articles in Category:WikiProject Australian Roads articles Downsize43 ( talk) 05:55, 28 February 2014 (UTC)
I wanted to know if there was a way to get a bot to remove wikilinks in pages where the wikilink goes back to the original page. Jinkinson talk to me 05:07, 6 March 2014 (UTC)
Add this templates, each one in its articles.
-- Vivaelcelta { talk · contributions} 08:50, 6 March 2014 (UTC)
Would it be possible to create a bot that would automatically arrange dates in chronological order (ranges would probably be from the earliest one) on pages? Supernerd11 :D Firemind ^_^ Pokedex 02:00, 9 March 2014 (UTC)
It's the Falkland Islands or the Falklands, not the Falklands Islands. Would it be possible for a BOT to be set up to fix this very common typo and change all instances of "Falklands Islands" to "Falkland Islands"? Many thanks. -- Philip Stevens ( talk) 14:00, 9 March 2014 (UTC)
Might there be any way to add a flag to a specific reference that is in the standard <ref>{{cite news |title=... |url=... etc.}}</ref> footnote format that would let some cruising bot know that the news article would likely be going behind a non-free/subscription wall in a few days/weeks, and thus would be really useful for Wikipedia use if some bot got an archive URL for the news article? Sometimes the news article I come across in leaving a citation is one that I know the newspaper won't leave the source freely available for more than a few days or weeks. Aviation Week, the Financial Times and The Economist all seem to do this, as do many others.
I asked this question on the Teahouse, or rather, a question as to whether there might already exist such a method, and was referred over here, after being told that no existing bot can handle that. Here is the remainder of that conversation, which clarifies things a bit. N2e ( talk) 18:34, 6 March 2014 (UTC)
The end goal would be to have a bot that searched articles for newly-added citations, and if a citation had the appropriate flag indicating that an editor had requested an archive URL be added, the bot would obtain the archive URL, and add the "|archiveurl=http etc.
" to the particular citation by replacing the "|requestArchiveUrl=y
" flag. That's my request. Let me know if you have any questions. And I thank you for even considering it.
N2e (
talk) 18:34, 6 March 2014 (UTC)
Opinion: Any bot that auto-magically converts regular refs to archive refs should be speedy declined. The last bot we had to do automated archive url creation needed an act of Jimbo to shut down and we still have that bot's vector and site under the global spam list. I see no problem with the bot adding a template in the reference tag that adds a hidden maintenance category for editors who want to premtively work frequently deadlinked sites is not a problem, but should not touch the cite template itself. Hasteur ( talk) 02:58, 7 March 2014 (UTC)
I think we are having two discussions here, the second of which seems to me may be derailing the conversation that I initially proposed to have. But maybe I'm wrong. Please consider:
So, if it is possible, I would really appreciate it if we might separate the two conversations.
Perhaps someone could move the stuff not related to THIS SPECIFIC BOT REQUEST to somewhere else where it might be productively dealt with. (IP editors avoiding consensus, etc.)
Then, perhaps we could discuss my specific bot proposal. Does a single-purpose bot, for a bot that would do only one thing (look for a specific editor-added flag) and then create an archive url for urls that would otherwise soon go behind a subscription paywall, create a bunch of issues that would make it a problematic bot? Thanks for reading this far. Cheers. N2e ( talk) 23:46, 9 March 2014 (UTC)
The merging process basically goes like this:
I would like to have it be such that adding an {{ R from merge}} tag automatically adds the merged-to and merged-from tags. The reasons being that this tag is only placed after a completed merger, and that the to and from info is readily available. I don't know if this can be accomplished with a substitution, but a bot seemed the most likely solution. -- Nick Penguin( contribs) 03:28, 12 March 2014 (UTC)
There are too many references on Wikipedia that are simply bare URLs. Could you make a bot that notifies a user on his or her talk page if they have simply added a URL as a reference; and remind them to cite the source properly?
If such a bot already exists, is there a way to make it better? Right now this seems to be a bit of a problem. Although, I have to admit, not quite as bad as it could be.
-- Kndimov ( talk) 22:31, 13 March 2014 (UTC)
OK, we recently had a RFC on the way things worked at Did you know, here, and the thing which gained consensus was a bot to notify people when others had nominated their article for DYK, see here. There have been a few issues recently over article creators not liking hooks, or not wanting their articles nominated, which could be helped if they were aware that there was a discussion about the nomination which they could contribute to. Thanks, Mat ty. 007 11:22, 23 February 2014 (UTC)
Hi,
Am involved in Tamil Wiki Projects. I would like to improve the quality of the Tamil Wikipedia articles by adding references/citations/notes for all the articles. If there is a bot for performing the operation it would be easier.
What functionality I need?
How it can be done?
How it will help?
The same tool can be used in other wiki projects also. Hope to get a bot soon for high-quality articles in Tamil. Thanks. -- Dineshkumar Ponnusamy ( talk) 09:57, 18 March 2014 (UTC)
FYI, I've started a discussion at VPT about how we deal with unresolved requests. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:10, 12 March 2014 (UTC)
Every page on a human name written in conventional Western style (e.g. Jacob Henderson) should have a redirect pointing to that name from its sortname title (e.g. Henderson, Jacob). This makes it easier to find names when searching by last name, which is the common practice of print encyclopedias, and which many users may expect. This also applies to disambiguation pages (e.g. John Anderson), which are tagged with an {{ hndis}} template, and should have an incoming redirect (e.g. Anderson, John). In each case, the redirect itself should be tagged {{ R from sort name}} to indicate that it is a printworthy redirect. There are some special cases, particularly redirects with accents, diacritics, or other special characters. Where a name has a such a character, (e.g. Torbjørn Agdestein), there should be a redirect from the actual last-name-first title (e.g. Agdestein, Torbjørn) and from the sortname title (e.g. Agdestein, Torbjorn), with only the sortname title having the {{ R from sort name}} tag. I estimate that there are about a half million last-name-first redirects needing to made. Note that pages with parenthetical disambiguators (e.g. Tom Jones (singer)) do not need such a redirect. Cheers! bd2412 T 17:23, 19 March 2014 (UTC)
“ | The basic rule "Wikipedia is not paper" also applies to the titles of the articles. There is no reason to use the reverse naming in most cases, just because paper encyclopedias do that | ” |
There is disagreement about whether this TLA should redirect to Local administrative unit (the situation until a couple of weeks ago) or Lebanese American University (changed then and since reversed). The disagreement can be largely made moot by fixing the 400 or so links to LAU in articles about settlements in Portugal. Each of these contains a {{ Geobox}} Settlement template, with a link to LAU. If a bot could pipe these to "Local administrative unit", very few links to LAU would remain. Colonies Chris ( talk) 16:01, 20 March 2014 (UTC)
Hello all, I am proposing an idea based on my current activities on Wikipedia. There should be a way that the massive amount of Raster images on the project that needs converting can be changed to Vector (SVG) files, a more efficient file type that can be edited at ease. — Preceding unsigned comment added by Danielh32 ( talk • contribs) 23:41, 20 March 2014 (UTC)
There are many hundreds of articles in the subcategories of Category:Parishes of Portugal whose ({{ Geobox}}es have problems with incorrect or unnecessarily indirect or unconventionally spelled links. The first in the list below is a problem that needs to be fixed; the others are all minor improvements that could be done at the same time.
Colonies Chris ( talk) 15:38, 22 March 2014 (UTC)
For example, would it be appropriate for a bot to change the sort key of
Category:Manufacturing companies established in 1897 in Category:Manufacturing companies by year of establishment to "1897". If so, I can provide a partial list of such categories (starting with x (companies) by year of estabilshment). Further research would be required for automated sorting for categories which have both years and decades, but I asking whether this would be appropriate. Frequency would probably be "on request" (or weekly) for new additions, and monthly for maintenance, once individual patterns are established.
More complicated patterns might include "by country" lists.
My question here is more hypothetical: Would it be appropriate for a bot or AWB script to do this. — Arthur Rubin (talk) 17:12, 23 March 2014 (UTC)
I suggest a bot to work through the 6,300-odd pages in Category:Pages with missing references list to insert a References section heading (if not present) and the {{reflist}} template, ignoring the pages mentioned on the category page, the operation to be repeated at stated intervals. It looks from the description as though User:JamietwBot would have done this but is shown on its page as inactive : Noyster (talk), 13:40, 25 March 2014 (UTC)
<ref>
with no balancing </ref>
, that will also throw the error "Cite error: There are <ref>
tags on this page, but the references will not show without a {{reflist}}
template (see the
help page).", even if there is a {{
reflist}}
or <references />
later on. That </ref>
might be missing by accident; or it might be present as a typo e.g. </reg>
; or the page may have been vandalised. Each case needs to be considered in relation to its recent editing history. --
Redrose64 (
talk) 16:14, 25 March 2014 (UTC)
Is there anyone who can make a bot that will bet on different colors in roulette? For example when winning it changes to betting on black and if loses it just doubles the bet on the same color. xoxo — Preceding unsigned comment added by 217.210.30.233 ( talk) 16:34, 25 March 2014 (UTC)
The website Television Without Pity is going to be shutting down next week (they aren't clear on the exact steps, how long the content will remain), and about 500 mainspace articles use it for references for primarily television episode reviews. (My list from AWB is: User:Masem/TWOP pages). While most of the recaps are archived at Archive.org, the way TWOP paginates it stories means only the first page of these reviews/recaps are archived, and I don't immediately see any way to alter the URL to get then entire recap in one shot. Of course, the ideal route would be to have a bot run though the list of articles using televisionwithoutpity.com links and archive those links, but because of this pagination, it would require 1) figuring out how many pages there is (fortunately, the url for a specific page is simply to get) and 2) submitting each of those pages to an archive. I've altered the TV project to this issue as they might provide more suggestions here. -- MASEM ( t) 21:53, 27 March 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
* non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 12:33, 12 March 2014 (UTC) * non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 11:51, 17 March 2014 (UTC) * non-automatic anti archive line * —[ ˈjøː ˌmaˑ] 15:39, 25 March 2014 (UTC)
What exactly do you want the bot to do? There were some moving requests in the past, but not much happened due to the opposition of some in the en.wp community, so you might want to search for support first.-- Strainu ( talk) 15:22, 30 March 2014 (UTC)
i need to have a bot for filling up my all entries in my project honeypot,its my college project and so i need that to show the demo to my professors — Preceding unsigned comment added by 27.251.70.204 ( talk) 06:17, 28 March 2014 (UTC)
Please note that when Matty.007 listed the request below on Feb 23, it was responded to by Ceradon on March 2. This editor's contributions page show he did no other edits since that time except to archive his talk pages on March 6, and previously had not edited since July 2013. Can someone else please program the bot we need described below? — Maile ( talk) 13:55, 28 March 2014 (UTC)
OK, we recently had a RFC on the way things worked at Did you know, here, and the thing which gained consensus was a bot to notify people when others had nominated their article for DYK, see here. There have been a few issues recently over article creators not liking hooks, or not wanting their articles nominated, which could be helped if they were aware that there was a discussion about the nomination which they could contribute to. Thanks, Mat ty. 007 11:22, 23 February 2014 (UTC)
{{DYKmake|article title|user name}}
, with an optional subpage parameter at the end. There should be a way to opt out, so, for example, people who collaborate on articles won't be irritated with unwanted notifications. It would be great if the bot could check to see if the nominator has already notified other user(s), but maybe that's asking for too much.
MANdARAX •
XAЯAbИAM 23:37, 31 March 2014 (UTC)afi (afi.com) have reorganised their website. Links in the form connect.afi.com no longer work. According to https://en.wikipedia.org/?title=Special:LinkSearch&limit=500&offset=0&target=http%3A%2F%2Fconnect.afi.com there are 210 results. Some are personal pages or talk pages, which don't need to be changed, but all links on main wiki pages should be re-pointed:
http://connect.afi.com/site/DocServer/100Movies.pdf?docID=281 ->
http://www.afi.com/Docs/100Years/100Movies.pdf
http://connect.afi.com/site/DocServer/10top10.pdf?docID=361 ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/Movies_ballot_06.pdf?docID=141 ->
http://www.afi.com/Docs/100Years/Movies_ballot_06.pdf
http://connect.afi.com/site/DocServer/TOP10.pdf ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/TOP10.pdf?docID=441 ->
http://www.afi.com/Docs/100Years/TOP10.pdf
http://connect.afi.com/site/DocServer/cheers100.pdf ->
http://www.afi.com/Docs/100Years/cheers100.pdf
http://connect.afi.com/site/DocServer/cheers100.pdf?docID=202 ->
http://www.afi.com/Docs/100Years/cheers100.pdf
http://connect.afi.com/site/DocServer/cheers300.pdf?docID=201 ->
http://www.afi.com/Docs/100Years/cheers300.pdf
http://connect.afi.com/site/DocServer/handv100.pdf?docID=246 ->
http://www.afi.com/Docs/100Years/handv100.pdf
http://connect.afi.com/site/DocServer/handv400.pdf?docID=245 ->
http://www.afi.com/Docs/100Years/handv400.pdf
http://connect.afi.com/site/DocServer/laughs100.pdf?docID=252 ->
http://www.afi.com/Docs/100Years/laughs100.pdf
http://connect.afi.com/site/DocServer/laughs500.pdf?docID=251 ->
http://www.afi.com/Docs/100Years/laughs500.pdf
http://connect.afi.com/site/DocServer/movies100.pdf?docID=264 ->
http://www.afi.com/Docs/100Years/movies100.pdf
http://connect.afi.com/site/DocServer/movies400.pdf?docID=263 ->
http://www.afi.com/Docs/100Years/movies400.pdf
http://connect.afi.com/site/DocServer/passions100.pdf?docID=248l ->
http://www.afi.com/Docs/100Years/passions100.pdf
http://connect.afi.com/site/DocServer/quotes100.pdf?docID=242 ->
http://www.afi.com/Docs/100Years/quotes100.pdf
http://connect.afi.com/site/DocServer/quotes400.pdf?docID=205 ->
http://www.afi.com/Docs/100Years/quotes400.pdf
http://connect.afi.com/site/DocServer/scores250.pdf?docID=221 ->
http://www.afi.com/Docs/100Years/scores250.pdf
http://connect.afi.com/site/DocServer/scores250.pdf?docID=22 ->
http://www.afi.com/Docs/100Years/scores250.pdf
http://connect.afi.com/site/DocServer/songs100.pdf?docID+244 ->
http://www.afi.com/Docs/100Years/songs100.pdf
http://connect.afi.com/site/DocServer/songs100.pdf?docID=244 ->
http://www.afi.com/Docs/100Years/songs100.pdf
http://connect.afi.com/site/DocServer/songs400.pdf?docID=243 ->
http://www.afi.com/Docs/100Years/songs400.pdf
http://connect.afi.com/site/DocServer/stars50.pdf?docID=262 ->
http://www.afi.com/Docs/100Years/stars50.pdf
http://connect.afi.com/site/DocServer/stars500.pdf?docID=261 ->
http://www.afi.com/Docs/100Years/stars500.pdf
http://connect.afi.com/site/DocServer/thrills100.pdf?docID=250 ->
http://www.afi.com/Docs/100Years/thrills100.pdf
http://connect.afi.com/site/DocServer/thrills400.pdf?docID=249 ->
http://www.afi.com/Docs/100Years/thrills400.pdf
http://connect.afi.com/site/PageServer?pagename=100YearsList -> is the wrong link anyway, needs to be manually remapped to the right target
http://connect.afi.com/site/PageServer?pagename=micro_100landing -> is the wrong link anyway, needs to be manually remapped to the right target
http://connect.afi.com/site/DocServer/10top10.pdf?docID=381&AddInterest=1781 could be changed to http://www.afi.com/Docs/100Years/TOP10.pdf, but the former lists all the nominations (I think) whereas the latter only lists the winners.
Manolan1 ( talk) 21:17, 31 March 2014 (UTC)
Hi! I've been asked to write a new task for my bot at User talk:PotatoBot#PotatoBot for Glottolog codes?. Before I start coding, I'd like to ask if there is any bot around that can already do this (i.e. adding parameters to language infoboxes based on a list in wiki format) without [much] additional coding work. Thanks, ἀνυπόδητος ( talk) 11:25, 4 April 2014 (UTC)
They changed their online Gold Book, breaking all our direct links. Easy regexp/replacement mapping to update them...see for example this manually done edit. Better a bot than 161 articles manually. DMacks ( talk) 01:58, 3 April 2014 (UTC)
The Judicial Committee of the Privy Council and the Supreme Court of the United Kingdom have changed their website ( ref) and old links are now broken. Requested changes:
-- Txuspe ( talk) 09:00, 5 April 2014 (UTC)
There are a lot of dead links on MTA (New York City)-related pages because MTA recently moved all its pages to new URLs as of March 6, 2014. The links have been dead since that date.
For example, the URL http://www.mta.info/nyct/facts/ridership/ridership_sub_annual.htm was moved to http://web.mta.info/nyct/facts/ridership/ridership_sub_annual.htm.
The URLs all have to be changed from the format http://www.mta.info/... to http://web.mta.info/...
At least 500 pages make use of the old http://www.mta.info/... URLs. I have already notified three WikiProjects ( WP:TRAINS, WP:NYC, WP:NYCPT) about this. -- Epicgenius ( talk) 19:49, 7 April 2014 (UTC)
I updated the links on your main page Metropolitan Transportation Authority (New York) – review the diff:
—This doesn't seem that easy to automate. A bot will need to verify that (1) the current link is a "404 Page Not Found" and (2) the new link it writes actually works. – Wbm1058 ( talk) 11:51, 10 April 2014 (UTC)
Over at Wikipedia:Dispute resolution noticeboard we have accrued an ad-hock combination of scripts and templates, assisted by EarwigBot and MiszaBot. In particular, we seem to be asking The Earwig for a lot. He has been very responsive and has been great about our constant stream of requests, but rather than dumping more and more on him I am wondering whether someone who is really good at automation has the time and inclination to do a proper job of re-engineering all of our DRN automation tools from top to bottom. If we manage to get a smooth-running system working, other noticeboards might be interested in using the same system. Is anyone interested in working on this?
(Previous request: Wikipedia:Bot requests/Archive 56#Dispute resolution noticeboard, User talk:Hasteur/Archive 8#DRNBot) -- Guy Macon ( talk) 17:33, 19 March 2014 (UTC)
So, is anyone here willing to volunteer to do a proper job of re-engineering all of our DRN automation tools from top to bottom? -- Guy Macon ( talk) 16:51, 27 March 2014 (UTC)
Ok, let me try to summarize:
-- — Keithbob • Talk • 16:56, 28 March 2014 (UTC)
Add this templates, each one in its articles.
-- Vivaelcelta { talk · contributions} 07:30, 17 March 2014 (UTC)
|isbn=
in CS1 citationsAt some point, a helpful bot or human editor inserted the {{
Please check ISBN}} template within the |isbn=
parameter in citations in a thousand or so (somewhere between hundreds and two thousand) articles. This addition may have been a helpful maintenance tag at one time, but now it interferes with displaying and fixing ISBNs in citations, as documented on the template's documentation page.
Can someone please use a bot or AWB or other means to run through
Category:Pages with ISBN errors and remove all instances of {{
Please check ISBN}} from the |isbn=
parameter in citations? This will make it easier for human editors to clean up the articles in the category. There are about 6,600 articles in the category.
As a side note, it is likely that many of the articles edited by the bot will remain in
Category:Pages with ISBN errors, since CS1 citations contain code that checks |isbn=
for valid values.
A sample article that shows what this template does to citations is Theleis I Den Theleis. Thanks in advance. – Jonesey95 ( talk) 04:53, 28 March 2014 (UTC)
{{
Please check ISBN}}
to be outside the {{
cite book}}
but still be inside the <ref>...</ref>
,
like this. --
Redrose64 (
talk) 08:40, 28 March 2014 (UTC)
Here's a regex (tested successfully via AutoEd) that you can use, if you are a willing AWB user or bot operator:
(\|\s*isbn\s*=\s*[\d-X]+\s*)\{\{Please check ISBN\|reason\=[a-z\d\s\.\(\)]+\}\}
Do the find in case-insensitive mode, and replace with $1
–
Jonesey95 (
talk) 04:01, 3 April 2014 (UTC)
|isbn=
in citations. –
Jonesey95 (
talk) 23:40, 9 April 2014 (UTC)@
Jonesey95,
Redrose64, and
Rich Fambrough: I ask for the Consensus discussion as a CYA of being a bot operator. So that I understand, we are to look for the {{
Please check ISBN}}
template inside a {{
cite book}}
block and move it outside the cite book, but still have it be inside the reference section. If this is correct, I'll start tinkering with my AWB rules to work on this.
Hasteur (
talk) 00:36, 10 April 2014 (UTC)
@ Jonesey95: Ok, the regex you gave me would handle the removal of the template, however the one I used to relocate the Please check ISBN and fix the closing didn't have enough potential matches in it's class match for after the ISBN template. [14] is a edit that satisfies the relocate better. Thoughts? Hasteur ( talk) 23:21, 14 April 2014 (UTC)
Cydebot (
talk ·
contribs) keeps replacing my template {{
DVD}} with a licensing template, but it is claiming
Wikipedia:Non-free content/templates which has been marked historical since 2012, and has been unused since 2008. There's no reason for this to still be actively replacing templates, since everything will have been fixed years ago. Further, {{
DVD}}
isn't even listed as a template needing replacement on that page. --
70.24.250.235 (
talk) 04:33, 31 March 2014 (UTC)
{{
DVD}}
from the list that Cydebot works from; unfortunately, I don't know what that list might be. The other is to move {{
DVD}}
to a different name - two that spring to mind are {{
DVD navbox}}
and {{
DVD topics}}
, and then adjust the articles to use the new name. These two names have never been used before, so it's unlikely (but possible) that these names are also on Cydebot's replace list. --
Redrose64 (
talk) 10:40, 10 April 2014 (UTC)Hey, it's fixed now. The long version of this is that in 2006 a lot of templates were migrated from one set of names to another, and to enforce that, I had a periodic bot task running that renamed any new uses of those old templates. It's been long enough now that I'm pretty sure no one even remembers the old templates, let alone uses them, and there's been some conflicts now with unrelated templates coming up using the old ones' names, so I've simply canceled the task. You shouldn't see this issue again. Sorry about that! -- Cyde Weys 05:13, 13 April 2014 (UTC)
{{
Non-free DVD cover}}
in error for {{
DVD}}
, and reverted the Cydebot edit. This has caused an anomaly at
The Hits: Chapter One#Track listing; see
Wikipedia talk:WikiProject Albums#Track listings for music DVDs. --
Redrose64 (
talk) 12:28, 13 April 2014 (UTC)From the very origin of the interwiki linking system, the prefix "wiki:" was available for making links to the WikiWikiWeb. However, in the intervening years there became less and less reason to link from here to that website, and people began to increasingly confuse it with the "Wikipedia:" namespace. So, given that the WikiWikiWeb also has the prefixes "c2:" and "WikiWikiWeb:", in January of this year it was switched off. (See the discussion at Meta.) This broke just short of 1100 links across all projects. About 700 of those are on this wiki, and I'd like to request a bot run to fix them. Thankfully, as page names on the WikiWikiWeb follow a predictable format (CamelCase), it was trivial to separate them out from the links that appear to have been intended to actually be "Wikipedia:" links. I've put both lists in this paste. The format is one page ID and link destination per line, tab-separated. Some pages have more than one link that needs to be fixed and thus appear multiple times. I also stripped out a lot of obvious mistaken article links from the latter list, plus any occurring on IP users' talk pages, because there's really not much point updating those. The final total of links that need fixing is 635.
If there's any other information I need to provide please let me know. Thank you. — Scott • talk 17:38, 8 April 2014 (UTC)
Some time ago I created an index to the London Gazette. The root page is Wikipedia:London Gazette Index.
It appears (per the following note) that the pages have been migrated, and not in a predictable way. I have requested a mapping file form Authority, and will share this, as needed, if I get it. If not there are other ways to fix up the index.
=== London Gazette index ===
Rich, I randomly tried a few links from the 1918 index and it looks like the Gazette's move to it's new url and pagination has screwed things up as I got a 100% error rate. For example the 1 January issue url you have in the index is http://www.london-gazette.co.uk/issues/30453/pages/1 this is now https://www.thegazette.co.uk/London/issue/30453/page/113 and the first supplement which was http://www.london-gazette.co.uk/issues/30454/supplements/1 is now https://www.thegazette.co.uk/London/issue/30454/supplement/225 All in a bit of a pain in the backside. Nthep ( talk) 08:08, 14 April 2014 (UTC)
- Yes, there is a nice document by Tim Berners-Lee explaining why people shouldn't break the Internet like this. We are used to it, though, on Wikipedia. Unfortunately I am prohibited from fixing anything by means other than "typing in the edit box". I will make an appeal in various places for someone to fix the index. All the best, Rich Farmbrough, 19:13, 14 April 2014 (UTC).
Any takers?
All the best,
Rich
Farmbrough, 19:53, 14 April 2014 (UTC).
Hello over at Wikiproject Video games, we need to clean up our articles and remove a hell of a lot of redundant data.
Since the project started, a number of template fields have been added and removed from the main article template. We now have 15 defunct fields that still appear in the code for some article pages, and we're now in the position where all of this old data is getting in the way and making things confusing for new users (They copy over template code from existing articles only to find that some template fields aren't working after they have populated them with data.)
Initially we were just going to delete the data, but Wikidata say they want it, so that negates any semi-automatic editing as its outside the capability of AWB; and the volume of edits makes any manual effort a non-starter.
In order to aid all users (especially new ones) in editing the infobox code, and at the same time preserve data, we want to move the populated defunct fields from the template on existing article pages, to a new hidden template at the bottom of the article, so that the data in those fields can be harvested later by the squirrels at WikiData. Any field that is blank can just be binned.
As we have over 11,000 articles that need this process carried out on them, a bot really is the only way of doing this.
We have a tracking category that lists every article that needs editing.
The discussions around this job are the following:
Wikipedia_talk:WikiProject_Video_games/Archive_104#We.27re_editing_42.25_of_all_WP:VG_articles
Template_talk:Infobox_video_game#Tracking_category
Category:Infobox video game with deprecated parameters
I've also put the details in a table to make things easier to read.
Job Description
| ||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|
Its not going to be easy as some of the fields contain user's own unique - and sometimes differing - formatting styles, but we know you'll find a way to cope with it. Hope you can help. - X201 ( talk) 08:12, 8 April 2014 (UTC)
{{
Video game data}}
to align with {{
Persondata}}
(without being agglutinative). All the best,
Rich
Farmbrough, 19:23, 9 April 2014 (UTC).Thanks for the AWB script, much more thorough than the one I made. As for WikiData, there has been no reply to the request I placed on their bot request page. - X201 ( talk) 08:04, 14 April 2014 (UTC)
I want to create a bot that does the editing on wikipedia on Mondays, Tuesdays, Wednesdays, Thursdays, and Fridays. Anuvarshanw ( talk) 19:51, 13 April 2014 (UTC)
Request withdrawn
A bot to archive my talk-pages on the basis of signals from me. The talk-pages are the primary talk page of specified accounts and nominated holding pages. The bot should check the pages frequently, ideally by watch-listing them, or monitoring recent changes feed.
A thread will closed by me, either by a triple "-" following someone else's comment or a line starting with "--" following my own sig. (Attempted) closures by other editors, excluding Femto Bot (who knows what it is doing, and mediates all other processes that know what they are doing) should be ignored.
The entire thread from the 2nd level header to the line before the signal should be moved to the destination, preceded by one blank line. Headers should have spaces removed from between the "=="s and the title of the thread. Blank lines immediately after headers, and trailing blank lines should be removed. The "signal" line together with preceding and succeeding blank lines should be replaced with one blank line.
Edit summary should ideally identify which threads were moved where, but a summary "2 threads moved to XX, 3 threads moved to Y" is acceptable.
If no destination is specified the target page is the archive pertaining to the month of the last sig in the thread.
If the target page does not exist it should be created, with the appropriate headers, and the index/mega pages updated as required. The naming scheme for archive pages will be User talk:<account name>/Archive/YYYY Monthanme (slightly different from that used in the past). A redirect from User talk:<account name>/Archive/YYYYMM should also be created.
If an invalid destination code is left the bot should leave a note on my talk page. The bot may mark the signal line in order to simplify the task of recognising that it has already seen it.
Syntax: <closure signal>[Dest code]
The only current valid destination codes are "TODO" (case insensitive), "TALK" (case insensiteve) and BARN. TODO's target is User talk:Rich Farmbrough/To do (regardless of the account being archived) sub pages may be specified in future. TALK is the talk page of the account involved. BARN specifies that any enclosed award or gift (kittens, cookies, beer etc.) should be copied to the barnstar locations for the account involved, and the thread should be archived as normal.
All the best:
Rich
Farmbrough, 19:35, 23 April 2014 (UTC).
In order to get a grip of articles under the scope of the new WP:Physiology, I request that all articles under Category:Physiologists and Category:Physiology and all subcategories be tagged with:
{{WikiProject Physiology |class=|importance=|field=}}
Thanks in advance, -- LT910001 ( talk) 00:13, 25 April 2014 (UTC)
CAT:UAA is used to track violations of the username policy pending discussions with the users, etc. User talk pages are added to the category and removed when the relevant users are blocked, deemed to have acceptable usernames, or when they become inactive. KingpinBot removes all of the blocked accounts automatically (although it has been dormant for some time). What we need is a bot that can remove all of the inactive accounts. Standard practice at CAT:UAA is if a user has (1) been in the category for 7+ days, & (2) not edited/made log entries for 7+ days, then it is removed. Right now all of this is done manually, but it would be more efficient for a bot to do it. Is this possible? NTox · talk 04:56, 15 April 2014 (UTC)
All content. i.e. web pages, under http://hometown.aol.com/ disappeared on Oct 31,2008 (see AOL Hometown). All links to that content should be marked {{Dead link}}.
Existing bots should be modified to do this. Lentower ( talk) 21:45, 28 April 2014 (UTC)
Note: I've since found that much of this content is archived at the Internet Archive Wayback Machine at https://archive.org/ in archives before Nov 1, 2008. Archives after Oct 31, 2008 just link to root of that web site. Lentower ( talk) 13:20, 29 April 2014 (UTC)
We are starting the project Videos for Wikipedia articles. The Category:Articles containing video clips is extremely useful for understanding what kinds of videos are already used in what kinds of articles. Alas, it seems the list is not complete. Therefore we would like to request a bot to search for articles with video clips and mark them for that category. Thank you. -- Vgrass ( talk) 16:56, 22 April 2014 (UTC)
Per the discussion at Wikipedia:Village pump (proposals)/Archive 110#Bot blank and template really, really, really old IP talk pages., there is consensus to have a bot replace all content on IP talk pages with an {{ OW}} tag if:
Some editors would allow even shorter time frames, but those in the proposal are what have been unanimously approved at this point. Now all we need is a bot to take this up. Cheers! bd2412 T 17:54, 22 April 2014 (UTC)
It has been pointed out that URLs ending in period cause issues when copied to clients such as email (version of 15:25, 21 April 2014). The easy solution as proposed by SmokeyJoe is to "create a redirect without the period for every article URL ending in a period". I support this, and propose assigning the task to a bot, to first make initial redirects for existing articles ending in a period such as those on the referenced list, and then to continue making such redirects for new articles, on an ongoing basis. Cheers! bd2412 T 20:04, 24 April 2014 (UTC)
Some stats for this problem:
Tentatively, this makes the scale of the work:
- TB ( talk) 15:52, 1 May 2014 (UTC)
This request from April 10, 2014 is still unresolved. Perhaps a bot could quickly be written to fix these links; it's been two weeks since the last reply has been made to this thread, and yet no solution is in place yet. Epicgenius ( talk) 03:11, 29 April 2014 (UTC)
Bot that reads newspapers and creates an ontologyWe can use published sources to create an ontology. — Preceding unsigned comment added by Geetha nitc ( talk • contribs) 12:10, 30 April 2014 (UTC) -- Geetha nitc ( talk) 12:19, 30 April 2014 (UTC) Bot to do temporal reasoning using ontologiesInformation changes with time. There is no bot which automatically infers new information and prints the source. A human has to do this process. This bot should reason the changes with time. — Preceding unsigned comment added by Geetha nitc ( talk • contribs) 12:16, 30 April 2014 (UTC) -- Geetha nitc ( talk) 12:19, 30 April 2014 (UTC) Bot that adds information using new constructsThis bot should convert an existing ontology to text. Precisely controlled language should be used. Controlled language creates short unambiguous sentences. -- Geetha nitc ( talk) 12:22, 30 April 2014 (UTC) |
I think the report Wikipedia:WikiProject Stub sorting/missing stubs could use to be updated on a regular basis (probably monthly). Could someone create a bot to do it? There are instructions at the bottom of the page on how the report is generated. עוד מישהו Od Mishehu 04:27, 2 May 2014 (UTC)
Discretionary sanctions ( WP:AC/DS) are a procedure established by the Arbitration Committee that allow administrators to take measures such as blocks or topic bans to prevent disruption in some particularly contentious topic areas, such as the Arab-Israeli conflict or other ethno-nationalistic or ideological disputes. The Committee just enacted a new set of procedures that provide that editors must be alerted about the existence of this procedure, and that such alerts expire after a year.
Now I wonder whether it would not be better to automate this alerting procedure with a bot, in order to save editors the hassle of alerting one another manually, and to prevent these alerts from appearing to be confrontational or accusatory in nature.
The bot I envisage would add the template {{ Ds/alert}} to the talk page of any editor who edits an article page within the scope of discretionary sanctions, as determined by categories associated with the covered topics on an administrator-editable configuration page. The bot would first check whether the editor has received such an alert in the last 12 months (there is a filter for this, see {{ Sanction enforcement request header}}). The template message would be preceded by a statement such as:
I have linked to this idea from the arbitration noticeboard talk page, to allow arbitrators to indicate whether they think that such a bot would be a good idea. Sandstein 09:27, 4 May 2014 (UTC)
CAT:UAA is used to track violations of the username policy pending discussions with the users, etc. User talk pages are added to the category and removed when the relevant users are blocked, deemed to have acceptable usernames, or when they become inactive. KingpinBot removes all of the blocked accounts automatically (although it has been dormant for some time). What we need is a bot that can remove all of the inactive accounts. Standard practice at CAT:UAA is if a user has (1) been in the category for 7+ days, & (2) not edited/made log entries for 7+ days, then it is removed. Right now all of this is done manually, but it would be more efficient for a bot to do it. Is this possible? NTox · talk 04:56, 15 April 2014 (UTC)
We need a bot to upload/protect local copies of images that are going to be used on the main page, since the commons bot protection system is unreliable. Wikipedia:Bots/Requests for approval/TFA Protector Bot 2 has stalled. Is there anyone able to take up the task? Bencherlite Talk 08:24, 7 May 2014 (UTC)
This bot has been down for four months and more than 200 portals are no longer updated. -- SleaY( t) 22:35, 9 May 2014 (UTC)