This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 45 | ← | Archive 50 | Archive 51 | Archive 52 | Archive 53 | Archive 54 | Archive 55 |
Is it possible to automatically remove the {{Wikivoyage-inline}}
templates in the external links sections of (all?) articles on geographical objects and instead switch the option voy=PAGETITLE
(or so) on in the box with the sister projects links, now that Wikivoyage is officially a sister project? --
Florian Blaschke (
talk) 18:49, 16 January 2013 (UTC)
Please change Category:Yokohama Flugels players to Category:Yokohama Flügels players like this. This football club's exact name was Yokohama Flügels, not Yokohama Flugels. Thanks. -- Japan Football ( talk) 14:16, 20 January 2013 (UTC)
I am trying to clear the backlog at Category:Articles with missing files, but it is proving difficult, it is mind-numbing, and there are better things to do. New pages are continually coming in, and I suspect that those who monitor recent changes are not picking up the problematic edits because images do not show on page diffs. There are some image link edits that can be reverted by a bot:
File:Http:// can be summarily deleted. I think there is a bot that does it but it may be a bit slow off the mark, or it needs to be run manually?
C:\ can also be immediately deleted. These are added by newbies who think they can link to the filepath on their local machine.
File:Name is another one for deletion (less common).
http://commons.wikimedia.org/wiki/ should be changed to link directly to the Commons image (even less common).
In all cases the bot should add a talk page message explaining the reason for the link deletion (similar to the one left by User:DPL bot). The bot should only work on edits which are solely the addition of a file link. If an edit is the addition of a red-linked file AND other stuff, a talk page message should be left and no change made to the article.
I have seen bots do some clever things so I think what I am suggesting is possible. Another way is to activate pending changes/flagged revisions but that is another story for another place. -- Alan Liefting ( talk - contribs) 21:02, 15 January 2013 (UTC)
Here are some examples: 1, 2, 3, 4. There are more in my edit history. -- Alan Liefting ( talk - contribs) 21:36, 15 January 2013 (UTC)
A file has been added -> does it exist on WP or Commons? -> no -> is the edit only file related? -> yes -> revert edit/remove file.
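That decision flow might look roughly like this in Python (an illustrative sketch only: the regex, function name, and the two boolean inputs are made up here, and a real bot would query the API for file existence and parse the diff itself):

```python
import re

# Rough sketch of the decision flow above. The regex covers the bad
# patterns listed (File:Http://, C:\, File:Name); the two boolean
# arguments stand in for an API existence lookup and diff parsing.
BAD_LINK = re.compile(r'\[\[File:(Http://|C:\\|Name\b)', re.IGNORECASE)

def classify_edit(added_text, file_exists, edit_is_file_only):
    """Return 'revert', 'notify_only', or 'ok' for an edit adding a file link."""
    if file_exists and not BAD_LINK.search(added_text):
        return 'ok'
    if edit_is_file_only:
        return 'revert'    # remove the link and leave a talk page message
    return 'notify_only'   # edit also adds other content: message only
```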
Please change domain name WARFARE.RU in all links to WARFARE.BE . WARFARE.RU was censored and moved to new domain WARFARE.BE . Old domain and links are not working. Actually it should be done across all Wiki languages - there are thousands of pages. Thanks. -- 188.191.19.243 ( talk) 10:22, 20 January 2013 (UTC)
Sometimes infoboxes are added with the piping ( vertical bar) added at the end of a line rather than at the start. It is a pain to fix and I am wondering if a bot is able to do it. As an example here is the Berekum Arsenal article infobox:
{{Football club infobox | clubname = Berekum Arsenal FC| current = 2012–13 Ghanaian Premier League | image = | fullname = Berekum Arsenal Football Club| nickname = | founded = 1978 | ground = [[Berekum Sports Stadium]],<br />[[Berekum]], [[Ghana]] | capacity = 5,000 | chairman = Alhaji Yakubu Moro| manager = Ebo Mends | league = [[Ghana Telecom Premier League]] | season = 2009/10 | position = | pattern_la1=|pattern_b1=|pattern_ra1=| leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000| pattern_la2=|pattern_b2=_unknown|pattern_ra2=| leftarm2=FFFFFF|body2=FFFFFF|rightarm2=FFFFFF|shorts2=FFFFFF|socks2=FFFFFF| }}
To me it seems simple enough for a bot to fix. It would simply be a matter of detecting the pipe followed by a CR (or LF?) and transposing the two. It is not a big issue but it makes it easier to update infoboxes, especially for newbies. -- Alan Liefting ( talk - contribs) 22:24, 20 January 2013 (UTC)
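The transposition Alan describes could indeed be expressed as a single regex substitution: a pipe at the end of a line is moved to the start of the next line, skipping lines that already begin with one. A minimal, untested sketch:

```python
import re

# Move a '|' that ends a line to the start of the next line, so each
# infobox parameter line begins with its own pipe. The lookahead skips
# lines that already start with a pipe, to avoid doubling it.
def move_trailing_pipes(infobox_text):
    return re.sub(r'\|[ \t]*\r?\n[ \t]*(?!\|)', '\n | ', infobox_text)
```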
| parameter =
". Finally, I notice in the example there are some lines in the template with more than one parameter, like leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000|. Could the bot fix these at the same time and put one parameter on its own line? Nothing like good old scope creep, eh? Regards, Illia Connell ( talk) 04:34, 23 January 2013 (UTC)
leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000|
(which are groups for good reason) a bot could still make it a cleaner layout.--
Alan Liefting ( talk - contribs) 19:13, 23 January 2013 (UTC)
{{Football club infobox | clubname = Berekum Arsenal FC | current = 2012–13 Ghanaian Premier League | image = | fullname = Berekum Arsenal Football Club | nickname = | founded = 1978 | ground = [[Berekum Sports Stadium]],<br />[[Berekum]], [[Ghana]] | capacity = 5,000 | chairman = Alhaji Yakubu Moro | manager = Ebo Mends | league = [[Ghana Telecom Premier League]] | season = 2009/10 | position = | pattern_la1=|pattern_b1=|pattern_ra1= | leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000 | pattern_la2=|pattern_b2=_unknown|pattern_ra2= | leftarm2=FFFFFF|body2=FFFFFF|rightarm2=FFFFFF|shorts2=FFFFFF|socks2=FFFFFF| }}
Werieth ( talk) 19:32, 23 January 2013 (UTC)
Just checked the article Wikiquote, and it looks like the last update of the Alexa rank by User:OKBot was on 2 August 2012. This bot is listed as active; however, its last updates to any articles were on 9 September 2012. Other editors tried to contact the bot owner on 11 December 2012, but the request was archived without a response from the owner. This bot would be very useful if it were working. Maybe someone else can take over the bot? -- Funandtrvl ( talk) 19:19, 23 January 2013 (UTC)
In September I got tired of fighting a losing battle and stopped supporting WikiProject United States. Since then the project has basically gone inactive and so have many of the supported ones as well. As such some projects want to break back out on their own again (So far WikiProject Kansas and Suny but more will come). What is needed now is a bot operator who can help convert the banner from the WPUS format into the individual project. It should be a fairly simple task but doing it by bot would be best and quicker than doing them all manually. Any takers? Kumioko ( talk) 01:47, 25 January 2013 (UTC)
Sorry, can pywikipediabot automatically add templates or other text after reflinks on certain sites? For example: [1], [2]. Thanks.-- Ворота рая Импресариата ( talk) 11:05, 20 January 2013 (UTC)
There are quite a few articles with just a few references that nevertheless use the multi-column format, because their reference sections (headers/templates) were copy/pasted, and it just looks really bad. An idea would be for a bot to go through and convert the multi-columns to the basic {{reflist}} for all articles with, say, fewer than 10 unique references. Werieth ( talk) 14:41, 25 January 2013 (UTC)
Please substitute " http://198.62.75.1/" with " http://www.christusrex.org/". Visite fortuitement prolongée ( talk) 21:46, 15 January 2013 (UTC)
Browsing, I notice a lot of bare URLs which look scruffy and make it look as if the article is lacking TLC (which it more often than not is, but that's not the point!). I was wondering if somebody could code a bot to a) search all Wikipedia entries for references with bare URLs, b) apply Template:Bare URLs to them, and then format the refs in citation templates. The same goes for references which only name the title but not the publisher and date of publication: apply Template:Expand ref and do the same thing. I think it could prove very valuable for improving format and consistency on Wikipedia.♦ Dr. ☠ Blofeld 23:11, 25 January 2013 (UTC)
|publisher= instead of |work=. Rjwilmsi's CiteCompletion bot is carefully set up to correctly handle a small number of sites. Because of the wide variations in web site setups, I agree that this is the appropriate approach for automated edits. The maintainers of Reflinks don't seem to be doing this, so it requires human review and correction before saving each edit. GoingBatty ( talk) 19:18, 27 January 2013 (UTC)
Could anyone help in cleaning Category:Infobox football biography image param needs updating? -- Magioladitis ( talk) 00:07, 27 January 2013 (UTC)
|caption=, the alt text to |alt=, the image size to |image_size=, and the filename to be stripped out. All other person infoboxes use this system. -- Magioladitis ( talk) 07:31, 27 January 2013 (UTC)
See edits 1, 2, and 3 to Pelusium — in November 2011 someone tagged a link as needing disambiguation; someone fixed it within hours; and I removed it a few minutes ago, more than a year after the issue was fixed. Do we have a bot that's supposed to go around and remove these tags when the link has been fixed? I doubt that such a bot would have difficulties; while most inline templates are context-based, like {{ cn}} or {{ who}}, every link either goes to a disambiguation page or doesn't. I suspect that a bot would easily be able to go everywhere that this template exists, check the link in the text before each transclusion, and remove the template from pages where this link doesn't go to a disambiguation page. Nyttend ( talk) 04:09, 28 January 2013 (UTC)
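The per-link check Nyttend describes could be sketched like this (illustrative Python; the is_dab() predicate is a hypothetical stand-in for an API query of the target page, and the regexes are simplified):

```python
import re

# For each [[link]]{{dn}} pair, drop the tag when the link target is
# no longer a disambiguation page; leave it alone otherwise.
PAIR = re.compile(
    r'(\[\[([^|\]]+)(?:\|[^\]]*)?\]\])\s*'
    r'\{\{\s*(?:dn|disambiguation needed)[^{}]*\}\}',
    re.IGNORECASE)

def strip_stale_dn_tags(wikitext, is_dab):
    def repl(m):
        link, target = m.group(1), m.group(2)
        return m.group(0) if is_dab(target) else link
    return PAIR.sub(repl, wikitext)
```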
I'm putting this back out here on this page. Rich Farmborough thought he could take care of it. Rich thought it was only about 120 articles involved, but it's literally thousands. The Texas Project has about 30,000 articles with the project banner. No way to know how many other Texas articles are there without the project banner. Countless numbers of those use the Handbook of Texas template in references. It is not unusual for an article to use that template multiple times in one article. Too much to do manually, and Rich could not get permission to run a bot. Now Rich has been blocked (unrelated to this) from editing for two months. — Maile ( talk) 00:31, 22 January 2013 (UTC)
Per Talk page conversation with Magioladitis, we need a bot to correct coding on existing articles that contain the Template:Handbook of Texas. The handbook changed its URLs. User Magioladitis has changed the template so it works with all new uses. But we need to run a bot to correct how existing templates were coded in the template section "id="
Therefore, we need a bot that makes these changes to the coding on "id=":
There are possibly hundreds or thousands of Texas articles affected. Please let me know if you need additional explanation. — Maile ( talk) 15:16, 23 December 2012 (UTC)
Is there a Bot that can find dead links and automatically find an archived link for them?-- Astros4477 ( Talk) 04:27, 26 January 2013 (UTC)
Special:Categories contains a number (probably dozens or scores, maybe hundreds) of non-existent categories with one article. It appears (to me at least) that these cats were added to articles either by mistake or in the hope that the cat would one day be created. Some of these are unlikely ever to be created, and the articles should be removed from the cat.
Identify from Special:Categories all non-existent categories (i.e., red-links) and remove the category from the articles therein. Run this bot occasionally, perhaps once a week.
[[Category to [[:Category)
Regards, Illia Connell ( talk) 04:19, 23 January 2013 (UTC)
The official name of the top division of the Korean professional football league was changed from K-League to K League Classic (source: the-afc.com). So, some categories related to the K-League must be changed too. I request to move all articles in Category:K-League players to Category:K League Classic players. z4617925 ( talk) 13:21, 30 January 2013 (UTC)
I would like to request the creation of a bot for detection of violations of NFCC Policy 10c. The bot should parse through all pages in File namespace and check whether it has a non-free copyright tag. If that's the case, then the bot should check, whether for each file use in article namespace there is a non-free use rationale (this should also include non-template based rationales, in that case it should at least be checked, whether that rationale mentions the article name). If that is not the case for a specific use, the bot should do the following:
The tagging of the file page will place that file page in a maintenance category for human editors to check. -- Toshio Yamaguchi 16:35, 22 January 2013 (UTC)
Note: I also notified the watchers of Wikipedia talk:Non-free content here and the wider community here. -- Toshio Yamaguchi 16:44, 22 January 2013 (UTC)
I haven't made up my mind yet about whether a bot run would be useful, but I have concerns about the wording of the notification templates. They are too soft. "Please add a valid non-free use rationale if possible [...] or discuss the issues at WP:NFCR" points the reader in the wrong direction. In 90% of all cases, at my rough estimation, the correct outcome will not be adding of a FUR, but removal of the image. We don't want to push editors to just mechanically add bad boilerplate FURs to cover up bad usages. We also don't want to spread the myth that you cannot remove a bad non-free image without prior discussion (whereas adding one without prior discussion is okay). My suggestion for the notification would be: "Please consider if the use of the file in these articles can be justified under our policy criteria. If yes, please add an appropriate non-free use rationale explaining how and why it is justified. If not, please remove it from the article. If in doubt, start a discussion at WP:NFCR." Fut.Perf. ☼ 18:43, 22 January 2013 (UTC)
Please let me know if there is anything else that needs to be addressed regarding this request. Are the templates okay now? -- Toshio Yamaguchi 19:09, 26 January 2013 (UTC)
Not a specific request as such, but this is an invitation for bot creators to head over to
Wikipedia talk:WikiProject UK geography now that we have the main data release for the
United Kingdom Census 2011 and we have to figure out the best way to incorporate it into Wikipedia.
Wikipedia talk:WikiProject India is also going to have a similar problem, their census website is down at the moment which may be because they're about to launch their main lot of data for the
2011 census of India. The UK already has templates such as {{
English district population}} in an attempt to use templates to pull demographic data from a central source and I seem to remember there was some kind of project going on for this kind of centralised data on the Toolserver, but I can't recall what it was called.
I suspect India would involve starting from scratch - and from experience there's going to be a lot of cleanup needed before any bots go near it: a lot of census districts either don't exist or - the bane of Indian geography - they do exist but under different names. It's not just the simple "updating" of colonial names either, like Bombay -> Mumbai. In some cases there are genuinely several different spellings that have been used throughout history, and in other cases you've got spellings being used as part of an attempt to push a particular ethnic POV.
FlagSteward (
talk) 15:27, 31 January 2013 (UTC)
Hello. The Romanian alphabet includes the letters Șș (S with comma) and Țț (T with comma). Before Unicode 3 was released and became common, these letters could hardly (or not at all) be typed on web sites, so people got used to the workaround of using Şş (S with cedilla) and Ţţ (T with cedilla) instead. Nowadays, this workaround is no longer needed: the Romanian Wikipedia has corrected them all, and the German Wikipedia followed and – with some of my contribution – has meanwhile finished. Could someone make a bot on en:wp substitute every Ş by Ș, every ş by ș, every Ţ by Ț and every ţ by ț in article titles within the Romanian geography categories, and do the same substitutions within the article text? Thank you very much! -- JøMa ( talk) 12:34, 30 January 2013 (UTC)
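The four substitutions are a straightforward character mapping; in Python, for instance (a sketch of the mapping only — the title moves and category scoping would be the real work for the bot):

```python
# Replace the cedilla workaround letters with the correct Romanian
# comma-below letters: Ş->Ș, ş->ș, Ţ->Ț, ţ->ț.
CEDILLA_TO_COMMA = str.maketrans({'Ş': 'Ș', 'ş': 'ș', 'Ţ': 'Ț', 'ţ': 'ț'})

def fix_romanian_diacritics(text):
    return text.translate(CEDILLA_TO_COMMA)
```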
The latter has replaced the former as a more descriptive file name on the Commons, but there are many pages which still point to the old file. Most of them are transcluded through the {{ Flagicon}} template, which has already been fixed, but it would be useful to have a bot go through and replace all the ones that link directly to the file. Sycamore ( talk) 18:29, 1 February 2013 (UTC)
At Cfd January 24 there was a consensus to delete {{ HarzMountain-geo-stub}} and its associated Category:Harz Mountain geography stubs. I closed the discussion, but while the bots at WP:CFD/W can delete the category, they don't know to orphan a template.
Please can some kind bot-owner orphan Template:HarzMountain-geo-stub (i.e remove all uses of it)?
Thanks! -- BrownHairedGirl (talk) • ( contribs) 20:07, 1 February 2013 (UTC)
Could anyone help in cleaning Category:Infobox cricketer using deprecated parameters? -- Magioladitis ( talk) 01:05, 27 January 2013 (UTC)
"|playername= needs to be renamed to |name=" and "|imagealt= needs to be renamed to |alt=" have been done already by Yobot. -- Cheers, Riley 01:58, 27 January 2013 (UTC)
The |living= and |partialdates= parameters, if present, must be removed, since their presence will cause obsolete parameters like |yearofbirth= to be processed (even if the new parameters like |birth_date= are provided) and generate an error. If there is no |death_date=, then |birth_date= should be given {{ birth date and age}} instead of {{ birth date}}.
There are 7,000 blue links on Wikipedia:WikiProject Languages/Primary language names in Ethnologue 16 by ISO code. I would like a bot check to verify which are obviously correct; I will then check any exceptions manually.
The parameter is whether the ISO code at the target article matches what we have on the list. For example, the first link on the list page is [[Ghotuo language|aaa]]. The article Ghotuo language has a language infobox with the parameter "iso3" set equal to aaa, so that link is good.
The potentially matching parameters in {{ Infobox language}} are iso3, lc1, lc2, lc3, .... (lc-n is used where there is more than one ISO code.)
I would like a list of any links, direct or redirected, which do not match. I suspect there are a fair number of circular links which need to be fixed. It would be nice if both pieces of data could be returned. So, if the example link were bad, we'd get back "Ghotuo language : aaa" or something similar.
Is that possible?
Thanks, — kwami ( talk) 06:42, 3 February 2013 (UTC)
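The comparison kwami asks for could be sketched like this (illustrative Python; the infobox parameter parsing is deliberately naive, and a real check would fetch pages via the API and follow redirects):

```python
import re

# Pull the iso3/lc1/lc2/... codes out of an article's {{Infobox
# language}} and compare them with the code from the list page.
def iso_codes_in_infobox(wikitext):
    return {code for _, code in
            re.findall(r'\|\s*(iso3|lc\d+)\s*=\s*([a-z]{3})\b', wikitext)}

def link_mismatch(list_code, article_wikitext):
    """Return the article's codes when list_code is missing (a mismatch), else None."""
    codes = iso_codes_in_infobox(article_wikitext)
    return None if list_code in codes else codes
```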
I have a very tedious job that I would love to have automated. The current table structure over at Wikipedia:Today's article for improvement/Nominated articles is going to be converted into this structure, using a template. Conveniently, each row in the table is labelled identically with comments, and the template is labelled with identifier tags. The details of what I'd like to convert can be found here, but the gist of it is that I'd like a bot to convert all of the tables into templates on the Nominations page, as well as the Holding Area and Archives. -- Nick Penguin( contribs) 03:29, 5 February 2013 (UTC)
Where is the discussion area about bot technology? I'm interested in learning what bots are and are not capable of.
For example, when articles are split and a summary is left in the original article, both the new article and the old section it was split from receive development from editors after the split, creating a fork situation.
New material is added by some editors to the new article, but new material also gets added to the original section in the old article. Not the same material.
There are tens of thousands of instances of the summary style being used, and a great many of them have resulted in forked content, with new content being added to the summary and not to the main article.
Could a bot be created that could sync up article sections with the corresponding {{ Main}} articles?
That is, identify — in the section — material that is not included in the main article, and copy or move that material to the main article?
Is natural language processing sufficiently sophisticated to handle this?
What tools are available (anywhere in the computing world) that would be useful for this? The Transhumanist 02:33, 8 February 2013 (UTC)
To what extent can bot technology be used for subject consolidation?
For example, is this doable: gather all the mentions of a particular individual from everywhere on Wikipedia and dump it all in a project page for evaluation by human editors?
Or identify and gather everything about the subject "natural language processing" regardless of what articles it appears in?
Sometimes, details of a subject are added to a less relevant article. For example, details about an organization in the biographical article on its founder. Those details may be more relevant to the organization article.
How can bots help find and gather material on Wikipedia about a subject that is somewhere other than the article on that subject? The Transhumanist 02:33, 8 February 2013 (UTC)
This would probably be considered a form of multi-document summarization.
Sometimes editors skip a subject and directly edit the article on a subtopic. There are many instances of subjects that are missing a subheading for a subtopic that has an article on Wikipedia.
Is it theoretically possible to write a bot that could analyze an article, and Wikipedia with respect to that article's subject, to identify missing subtopics that have their own articles?
It should be an easy matter to check for matching article titles once the subtopics have been determined.
To build a section, all you'd have to do is copy the lead (or lead paragraph) of the subtopic article.
But how would the bot determine the names of the subtopics that are missing from a subject's article? The Transhumanist 07:44, 8 February 2013 (UTC)
Easier than the above described bot would be one that looked for sections whose total content is a {{ Main}} template.
That is, the section has a "Main article" hatnote, but is otherwise empty. It's missing a summary style summary.
The bot would simply insert into the empty section a copy of the lead paragraph from the article specified in the Main template.
What problems am I overlooking here? The Transhumanist 07:52, 8 February 2013 (UTC)
P.S.: How would a bot go about finding such sections? -TT
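One possible answer to the P.S.: scan each section body and flag those containing nothing but a {{Main}} hatnote. A rough Python sketch (the regexes are simplified and would need hardening against real wikitext):

```python
import re

# A section qualifies when its body, after the heading line, is
# nothing but a single {{Main|...}} hatnote.
SECTION = re.compile(r'^(==+[^=\n]+==+)\n(.*?)(?=^==|\Z)', re.M | re.S)
ONLY_MAIN = re.compile(r'^\s*\{\{\s*Main\s*\|([^{}]+?)\s*\}\}\s*$',
                       re.IGNORECASE)

def empty_main_sections(wikitext):
    """Return (section heading, main-article target) pairs."""
    hits = []
    for m in SECTION.finditer(wikitext):
        only = ONLY_MAIN.match(m.group(2))
        if only:
            hits.append((m.group(1).strip('= '), only.group(1)))
    return hits
```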
Ambassadors from one country of the Commonwealth of Nations to another are called High Commissioners (though their responsibilities are that of an ambassador). Accordingly, there's no Category:Ambassadors of the United Kingdom to Canada but rather Category:High Commissioners of the United Kingdom to Canada. This category is included in the parent category Category:High Commissioners of the United Kingdom which is itself included in Category:Ambassadors of the United Kingdom. In a recent CfD about the ambassador categories of the United Kingdom, it was decided (among other things) that categories of the form Category:High Commissioners of the United Kingdom to Canada should be subcategories of both Category:High Commissioners of the United Kingdom and Category:Ambassadors of the United Kingdom. It's natural to extend this solution to every country of the Commonwealth and nobody objected when I posted this proposal at Wikipedia talk:WikiProject International relations a couple of weeks ago.
So here's the bot task.
Thanks, Pichpich ( talk) 16:28, 8 February 2013 (UTC)
If possible, I would like to request a bot run to correct/change the following edits. I have written more than 160 articles about Michelin-starred restaurants and it would eat up a shocking amount of time to figure out where the outdated sources are placed and correct them. The last edit is a link fix, because I am sick of all the people saying that it is POV to call a Michelin-starred restaurant a "quality restaurant", although they are judged on the quality of their food. The Banner talk 15:02, 12 January 2013 (UTC) If I am at the wrong place, sorry. Please move it to the right place. This is out of my comfort zone.
Is this a difficult request or the wrong place? The Banner talk 12:47, 5 February 2013 (UTC)
{{ nl}} before the {{ cite news}} template and ends with "Historical Overview Dutch Michelin stars xxxx" after the {{ cite news}} template. Would it be better to use the parameters |language=Dutch and |trans_title=Historical Overview Dutch Michelin stars xxxx within the {{ cite news}} template instead? Thanks! GoingBatty ( talk) 17:53, 5 February 2013 (UTC)
Per consensus at Template_talk:Infobox_company#Replacing_full_stop_with_comma_at_the_num_employees_field and the collapsed discussion below, there is a request to replace a full stop (.) used as a thousands separator with a comma (,) in the num_employees fields of company infoboxes. The rationale and a more detailed description are provided below, along with information on which forums were notified about the discussion. Beagel ( talk) 19:52, 1 February 2013 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
The following discussion is migrated from the company's infobox talk page. Beagel ( talk) 18:10, 1 February 2013 (UTC)
A lot of current num_employees fields in the companies infoboxes use a full stop (.) to separate thousands (e.g. 12.200, 5.200) instead of using a comma (,). This is confusing, as a full stop (.) usually means the decimal point, and this also violates WP:MOSNUM. E.g. the infobox at Minerva S.A. states that "num_employees = 7.000"; it would be replaced by "num_employees = 7,000". Most of these (but not all) were introduced by the blocked User:Edson Rosa and his sockpuppets. As the number is large (mainly concerning Brazilian companies) and they are hard to detect manually, I propose to use some bot for this task. If the proposal achieves consensus, it applies only to
I also notified WP:COMPANIES and WP:MOSNUM about this proposal. Beagel ( talk) 18:12, 28 January 2013 (UTC)
Comments
Support
Oppose
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Why was this discussion closed just a few minutes after filing, as it was specifically about requesting a bot for a certain task? Where should it be discussed if not here? Beagel ( talk) 18:37, 1 February 2013 (UTC)
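For what it's worth, the proposed replacement is easy to express conservatively, so that genuine decimals are untouched (an illustrative Python sketch, not the actual bot):

```python
import re

# Within a num_employees value, a full stop between a digit and a
# three-digit group becomes a comma, so "7.000" -> "7,000" while a
# genuine decimal such as "7.5" is left alone.
FIELD = re.compile(r'(num_employees\s*=\s*)([\d.]+)')

def fix_thousands(infobox_text):
    def repl(m):
        return m.group(1) + re.sub(r'(?<=\d)\.(?=\d{3}\b)', ',', m.group(2))
    return FIELD.sub(repl, infobox_text)
```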
Is it possible to get a bot to place an unreferenced article tag on all articles without a single instance of "<ref>"? This would speed up the categorisation and reduce human editors' workloads. Thanks ツ Jenova 20 ( email) 10:03, 8 February 2013 (UTC)
Doing this with a bot would incorrectly tag articles that use external links instead of inline references. Of course, these external links should ideally be replaced by inline citations but that's a separate issue. Pichpich ( talk) 16:10, 8 February 2013 (UTC)
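Any such tagging bot would at minimum need to recognise several referencing styles before tagging, e.g. (a rough Python sketch; the pattern list is illustrative, not exhaustive):

```python
import re

# A plain search for "<ref>" would miss named refs and template-based
# citations; check several signs of referencing before flagging.
REF_SIGNS = [
    re.compile(r'<ref[\s>/]', re.IGNORECASE),       # <ref>, <ref name=...>
    re.compile(r'\{\{\s*(sfn|harvnb)\b', re.IGNORECASE),
]

def looks_unreferenced(wikitext):
    return not any(p.search(wikitext) for p in REF_SIGNS)
```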
<ref>...</ref>, {{ sfn}} or any other methods now known or to be invented. -- Redrose64 ( talk) 19:57, 12 February 2013 (UTC)
Template bombing is a bit of a problem here: it disfigures articles and can hide the more important templates amidst a clutter of ones that are more about maintenance. There are two things that would reduce this problem:
Please can we have a bot to find and cull duplicate templates. Where their dates differ keep the earlier template. Ϣere SpielChequers 16:31, 8 February 2013 (UTC)
Please can we have a bot to find articles with more than three templates and ensure they are in the {{ multiple issues}} template. Currently we have some articles with lots of templates, and worse, ones where the most important templates are in a {{ multiple issues}} template and less serious ones are given far greater prominence by being a large separate template. Ϣere SpielChequers 16:31, 8 February 2013 (UTC)
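The duplicate-culling part could be sketched as follows (illustrative Python only: this version keeps the first copy of each template name, whereas a production bot would compare |date= values and keep the earlier one, as requested above):

```python
import re

# Drop repeat copies of a template, keeping the first occurrence of
# each template name. The template-name parsing is simplistic and
# would match non-maintenance templates too.
TAG = re.compile(r'\{\{\s*([A-Za-z ]+?)\s*(\|[^{}]*)?\}\}\n?')

def cull_duplicate_tags(wikitext):
    seen = set()
    def repl(m):
        name = m.group(1).lower()
        if name in seen:
            return ''
        seen.add(name)
        return m.group(0)
    return TAG.sub(repl, wikitext)
```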
I have added the above tasks to my BRFA here and the code is now written. Just have to wait for another trial for testing and bug fixing. If anyone can find any specific articles I could test the bot on that would be great as it is quite hard to find articles with duplicate tags on them. Cheers ·Add§hore· Talk To Me! 20:37, 8 February 2013 (UTC)
I have nominated 158 categories for merger at Wikipedia:Categories for discussion/Log/2013 February 13#States_and_territories_disestablished_before_1000CE.
This is too many to tag manually, so I have made a list at
User:BrownHairedGirl/List of states and territories disestablished before 1000AD categories. Please can a bot tag all 158 of these categories with the tag below? The tag should be at the top of the page.
{{subst:cfm|2=States_and_territories_disestablished_before_1000CE}}
Thanks. -- BrownHairedGirl (talk) • ( contribs) 18:23, 13 February 2013 (UTC)
<!--BEGIN CFD TEMPLATE--> <!-- Please do not remove or change this [[Template:Cfm]] message until the survey and discussion at [[WP:Cfd]] is closed --> {{Cfm full|day=13|month=February|year=2013|1=States_and_territories_disestablished_before_1000CE|target=}} <!-- End of Cfm message, feel free to edit beyond this point. --> <!--END CFD TEMPLATE-->
The first column of both tables at List of round barns contains a numbered list. This numbered list means nothing and also forces a change to every subsequent row if someone wants to add a row in. Is there a quick and easy way to completely remove the number column/entries? Ryan Vesey 19:35, 13 February 2013 (UTC)
If someone wants to work for bot-creation, there is a lot of creative work on Sanskrit wikipedia (sa.wikipedia.org) waiting. Those who want to volunteer can contact me via e-mail hmt[dot]seeit[at]gmail.com . Thanks a lot. - Hemant wikikosh ( talk) 13:41, 14 February 2013 (UTC)
Hi. I would like to make a request to remove interwiki links from articles that already have interwiki links provided by Wikidata. Thanks, Lord Sjones23 ( talk - contributions) 16:43, 15 February 2013 (UTC)
Category:Orissa articles missing geocoordinate data was recently renamed to Category:Odisha articles missing geocoordinate data through WP:CFDS. The trouble is, this category is populated by {{ coord missing}} - and with 287 articles that need to be changed, that's a small bit of drudgery to be gone though. Would it be possible to get a bot to search the contents of Category:Orissa articles missing geocoordinate data for "{{coord missing|Orissa}}" and replace all instances found with "{{coord missing|Odisha}}"? Thanks! - The Bushranger One ping only 07:44, 16 February 2013 (UTC)
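The replacement itself is trivial (a minimal Python sketch; a real run would iterate over the category members via the API, and a regex variant could also catch whitespace inside the template):

```python
# Point {{coord missing}} at the renamed region.
def retarget_coord_missing(wikitext, old='Orissa', new='Odisha'):
    return wikitext.replace('{{coord missing|' + old + '}}',
                            '{{coord missing|' + new + '}}')
```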
I request a bot to automatically change all instances of "and/or" in the main namespace with "or", per WP:ANDOR, excluding any articles under Category:English grammar or one of its subcategories. ❤ Yutsi Talk/ Contributions ( 偉特 ) 04:20, 17 February 2013 (UTC)
At the moment, Wikivoyage is opt-in in {{ sister project links}}, while all other WMF projects (except Wikidata and Wikispecies, which are special) are opt-out. This doesn't make sense and should be changed. To do so, however, we would need a bot to go through all transclusions of {{ sister project links}} and add a "voy=no" parameter, except on articles that are about a town, state, country, etc. It would be fairly simple to use something like {{ infobox settlement}} as a guide, but it may not be very accurate. (I'm not sure if it is necessary to check for a corresponding page on Wikivoyage, as some of the search results on other pages may be useful as well.) — This, that and the other (talk) 10:22, 7 February 2013 (UTC)
Is it possible to scan our language articles to check that they link to the correct ISO codes?
There are 7,500 blue links on Wikipedia:WikiProject Languages/Primary language names in Ethnologue 16 by ISO code. If I could get a bot check to verify which are obviously correct, I could check any exceptions manually.
The parameter is whether the ISO code at the target article matches what we have on the ISO list. For example, the first link on the list page is [[Ghotuo language|aaa]]. The article Ghotuo language has a language infobox with the parameter "iso3" set equal to aaa, so that link is good. That should be easy to check by bot, assuming it can follow redirects.
The potentially matching parameters in {{ Infobox language}} are iso3, lc1, lc2, lc3, .... (lc-n is used where there is more than one ISO code.)
I'm hoping for a list of any language names which do not link to the matching ISO code. I suspect there are a fair number of circular links which need to be fixed. It would be nice if both pieces of data could be returned. So, if the example link were bad, we'd get back "Ghotuo language : aaa" or something similar.
Is that possible?
Thanks, — kwami ( talk) 06:42, 3 February 2013 (UTC)
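A check like this could be sketched as follows (a minimal sketch, assuming the list links and each article's infobox wikitext are already in hand; the names are illustrative, and a real bot would also need to resolve redirects before reading the infobox):

```python
import re

LINK_RE = re.compile(r"\[\[([^|\]]+)\|([a-z]{3})\]\]")          # [[Article|iso]]
PARAM_RE = re.compile(r"\|\s*(?:iso3|lc\d+)\s*=\s*([a-z]{3})")  # iso3=, lc1=, lc2=, ...

def find_mismatches(list_wikitext, article_wikitexts):
    """Return (title, code) pairs where the linked article's infobox
    does not declare the expected ISO 639-3 code."""
    bad = []
    for title, code in LINK_RE.findall(list_wikitext):
        infobox = article_wikitexts.get(title, "")
        declared = set(PARAM_RE.findall(infobox))
        if code not in declared:
            bad.append((title, code))  # reported as "Ghotuo language : aaa"
    return bad
```

A good link such as [[Ghotuo language|aaa]] pointing at an article with iso3=aaa produces no output; anything returned would go on the exception list for manual review.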
I am trying to reactivate WikiProject Dutch municipalities. One of the steps would be to assess all relevant articles, which are the articles on the municipalities and the articles about their subdivisions. All these articles are already tagged with the {{ WikiProject Netherlands}} banner as far as I can tell. These would need to be updated with |muni=yes |muni-importance=Low/Mid.
All municipalities would have to be automatically tagged as Mid and their subdivisions as Low importance. For each of 12 provinces there are lists of Cities, towns and villages and categories with all the municipalities. All articles occurring in both would need a Mid-importance tag, the articles only occurring in the former a Low-importance tag. Both would need the |muni=yes as well.
I am not fully familiar with the Wikipedia bot process, but I assume this would be possible to do automatically. Could any of you advise me on how to proceed with this? CRwikiCA ( talk) 20:22, 15 February 2013 (UTC)
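The importance rule described above reduces to set membership. A sketch, assuming the province "Cities, towns and villages" lists and the municipality categories have already been collected into sets of titles (names illustrative):

```python
def banner_update(title, settlement_titles, municipality_titles):
    """Return the parameters to add to the {{WikiProject Netherlands}}
    banner: articles in both the province lists and the municipality
    categories get Mid importance, list-only settlements get Low."""
    if title in settlement_titles and title in municipality_titles:
        return "|muni=yes |muni-importance=Mid"
    if title in settlement_titles:
        return "|muni=yes |muni-importance=Low"
    return None  # not in scope for the task force
```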
I am trying to clear the backlog at Category:Articles with missing files, but it is proving to be difficult, it is mind-numbing, and there are better things to do. New pages are continually coming in and I suspect that those who monitor recent changes are not picking up the problematic edits because images do not show on page diffs. There are some image link edits that can be reverted by a bot:
- File:Http:// links can be summarily deleted. I think there is a bot that does it but it may be a bit slow off the mark or it needs to be run manually?
- C:\ links can also be immediately deleted. These are added by newbies who think they can link to the filepath on their local machine.
- File:Name is another one for deletion (less common).
- http://commons.wikimedia.org/wiki/ links should be changed to link directly to the Commons image (even less common).

In all cases the bot should add a talk page message explaining the reason for the link deletion (similar to the one left by User:DPL bot). The bot should only work on edits which are solely the addition of a file link. If an edit is the addition of a red-linked file AND other stuff, a talk page message should be left and no change made to the article.
I have seen bots do some clever things so I think what I am suggesting is possible. Another way is to activate pending changes/flagged revisions but that is another story for another place. -- Alan Liefting ( talk - contribs) 21:02, 15 January 2013 (UTC)
Here are some examples: 1, 2, 3, 4. There are more in my edit history. -- Alan Liefting ( talk - contribs) 21:36, 15 January 2013 (UTC)
A file has been added -> does it exist on WP or Commons? -> no -> is the edit only file related? -> yes -> revert edit/remove file.
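For the first two cases, the malformed links are mechanical enough to match with a pattern. A rough sketch (the regex is illustrative and deliberately narrow; the existence check against Wikipedia and Commons, and the talk-page notification, would still need API calls on top of this):

```python
import re

# Matches file links whose "name" is actually a URL or a local Windows
# path, e.g. [[File:Http://example.com/x.jpg]] or [[File:C:\pics\me.jpg]].
BAD_FILE_LINK = re.compile(
    r"\[\[(?:File|Image):\s*(?:https?://|[A-Za-z]:\\)[^\]]*\]\]",
    re.IGNORECASE,
)

def strip_bad_file_links(wikitext):
    """Remove file links that can never resolve to a real file."""
    return BAD_FILE_LINK.sub("", wikitext)
```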
Please change the domain name WARFARE.RU to WARFARE.BE in all links. WARFARE.RU was censored and moved to the new domain WARFARE.BE. The old domain and links are not working. Actually it should be done across all Wiki languages - there are thousands of pages. Thanks. -- 188.191.19.243 ( talk) 10:22, 20 January 2013 (UTC)
Sometimes infoboxes are added with the piping ( vertical bar) added at the end of a line rather than at the start. It is a pain to fix and I am wondering if a bot is able to do it. As an example here is the Berekum Arsenal article infobox:
{{Football club infobox | clubname = Berekum Arsenal FC| current = 2012–13 Ghanaian Premier League | image = | fullname = Berekum Arsenal Football Club| nickname = | founded = 1978 | ground = [[Berekum Sports Stadium]],<br />[[Berekum]], [[Ghana]] | capacity = 5,000 | chairman = Alhaji Yakubu Moro| manager = Ebo Mends | league = [[Ghana Telecom Premier League]] | season = 2009/10 | position = | pattern_la1=|pattern_b1=|pattern_ra1=| leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000| pattern_la2=|pattern_b2=_unknown|pattern_ra2=| leftarm2=FFFFFF|body2=FFFFFF|rightarm2=FFFFFF|shorts2=FFFFFF|socks2=FFFFFF| }}
To me it seems simple enough for a bot to fix. It would simply be a matter of detecting the pipe followed by a CR (or LF?) and transposing the two. It is not a big issue but it makes it easier to update infoboxes, especially for newbies. -- Alan Liefting ( talk - contribs) 22:24, 20 January 2013 (UTC)
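The transposition could indeed be a one-line substitution. A sketch, assuming the text passed in is a single template; the guard before the closing braces is needed so the final trailing pipe is not turned into a stray "| }}" line:

```python
import re

def fix_pipe_placement(template_text):
    """Move a pipe from the end of a line to the start of the next,
    so every parameter line begins with "| " (a sketch, not battle-tested)."""
    return re.sub(r"[ \t]*\|[ \t]*\n(?!\s*\}\})", "\n| ", template_text)
```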
How about also having the bot put each parameter on its own line, in the form "| parameter =". Finally, I notice in the example there are some lines in the template with more than one parameter, like "leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000|". Could the bot fix these at the same time and put one parameter on its own line? Nothing like good old scope creep, eh? Regards, Illia Connell ( talk) 04:34, 23 January 2013 (UTC)

Even keeping multi-parameter lines such as "leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000|" together (which are groups for good reason), a bot could still make it a cleaner layout. -- Alan Liefting ( talk - contribs) 19:13, 23 January 2013 (UTC)

{{Football club infobox
| clubname = Berekum Arsenal FC
| current = 2012–13 Ghanaian Premier League
| image =
| fullname = Berekum Arsenal Football Club
| nickname =
| founded = 1978
| ground = [[Berekum Sports Stadium]],<br />[[Berekum]], [[Ghana]]
| capacity = 5,000
| chairman = Alhaji Yakubu Moro
| manager = Ebo Mends
| league = [[Ghana Telecom Premier League]]
| season = 2009/10
| position =
| pattern_la1=|pattern_b1=|pattern_ra1=
| leftarm1=FFFFFF|body1=DD0000|rightarm1=FFFFFF|shorts1=FFFFFF|socks1=DD0000
| pattern_la2=|pattern_b2=_unknown|pattern_ra2=
| leftarm2=FFFFFF|body2=FFFFFF|rightarm2=FFFFFF|shorts2=FFFFFF|socks2=FFFFFF|
}}
Werieth ( talk) 19:32, 23 January 2013 (UTC)
Just checked the article Wikiquote, and it looks like the last update for the Alexa rank by User:OKBot was on 2 August 2012. This bot is listed as active; however, the last updates for any articles were on 9 September 2012. Other editors have tried to contact the bot owner on 11 December 2012, but the request was archived without a response from the owner. This bot would be very good, if it was working. Maybe someone else can take over the bot? -- Funandtrvl ( talk) 19:19, 23 January 2013 (UTC)
In September I got tired of fighting a losing battle and stopped supporting WikiProject United States. Since then the project has basically gone inactive, and so have many of the supported ones as well. As such, some projects want to break back out on their own again (so far WikiProject Kansas and SUNY, but more will come). What is needed now is a bot operator who can help convert the banner from the WPUS format into the individual project's. It should be a fairly simple task, but doing it by bot would be best and quicker than doing them all manually. Any takers? Kumioko ( talk) 01:47, 25 January 2013 (UTC)
Sorry, can pywikipediabot automatically add templates or other text after reflinks on certain sites? For example: [1], [2]. Thanks.-- Ворота рая Импресариата ( talk) 11:05, 20 January 2013 (UTC)
There are quite a few articles where there are just a few references, yet because of copy/pasting reference sections (headers/templates) they use the multi-column format, which just looks really bad. An idea would be for a bot to go through and convert the multi-column format to the basic {{reflist}} for all articles with, say, less than 10 unique references. Werieth ( talk) 14:41, 25 January 2013 (UTC)
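A sketch of that conversion (counting reference definitions by regex is approximate; self-closing reuses like <ref name=x /> are deliberately not counted as definitions, and a real bot would have to handle list-defined references too):

```python
import re

def simplify_reflist(wikitext, threshold=10):
    """Replace a multi-column {{reflist|...}} with plain {{reflist}}
    when the page defines fewer than `threshold` references."""
    definitions = re.findall(r"<ref(?:\s[^>]*[^/>])?>", wikitext)
    if len(definitions) < threshold:
        wikitext = re.sub(r"\{\{[Rr]eflist\s*\|[^}]*\}\}", "{{reflist}}", wikitext)
    return wikitext
```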
Please substitute " http://198.62.75.1/" with " http://www.christusrex.org/". Visite fortuitement prolongée ( talk) 21:46, 15 January 2013 (UTC)
Browsing, I notice a lot of bare URLs which look scruffy and make it look as if the article is lacking TLC (which it more often than not is, but that's not the point!). I was wondering if somebody could code a bot to a] search all Wikipedia entries for references with bare URLs, b] apply Template:Bare URLs to them, and then format the ref in citation templates. The same goes for references which only name the title, not publisher and date of publication: apply Template:Expand ref and do the same thing. I think it could prove very valuable for improving format and consistency on Wikipedia.♦ Dr. ☠ Blofeld 23:11, 25 January 2013 (UTC)
One problem is that such tools often put the work name in |publisher= instead of |work=. Rjwilmsi's CiteCompletion bot is carefully set up to correctly handle a small number of sites. Because of the wide variations in web site setups, I agree that this is the appropriate approach for automated edits. The maintainers of Reflinks don't seem to be doing this, so it requires human review and correction before saving each edit. GoingBatty ( talk) 19:18, 27 January 2013 (UTC)

Could anyone help in cleaning Category:Infobox football biography image param needs updating? -- Magioladitis ( talk) 00:07, 27 January 2013 (UTC)
The caption needs to be moved to |caption=, the alt text to |alt=, the image size to |image_size=, and the filename to be stripped out. All other persons' infoboxes use this system. -- Magioladitis ( talk) 07:31, 27 January 2013 (UTC)

See edits 1, 2, and 3 to Pelusium — in November 2011 someone tagged a link as needing disambiguation; someone fixed it within hours; and I removed it a few minutes ago, more than a year after the issue was fixed. Do we have a bot that's supposed to go around and remove these tags when the link has been fixed? I doubt that such a bot would have difficulties; while most inline templates are context-based, like {{ cn}} or {{ who}}, every link either goes to a disambiguation page or doesn't. I suspect that a bot would easily be able to go everywhere that this template exists, check the link in the text before each transclusion, and remove the template from pages where this link doesn't go to a disambiguation page. Nyttend ( talk) 04:09, 28 January 2013 (UTC)
I'm putting this back out here on this page. Rich Farmborough thought he could take care of it. Rich thought it was only about 120 articles involved, but it's literally thousands. The Texas Project has about 30,000 articles with the project banner. No way to know how many other Texas articles are there without the project banner. Countless numbers of those use the Handbook of Texas template in references. It is not unusual for an article to use that template multiple times in one article. .Too much to do manually, and Rich could not get permission to run a bot. Now Rich has been blocked (unrelated to this) from editing for two months. — Maile ( talk) 00:31, 22 January 2013 (UTC)
Per talk page conversation with Magioladitis: we need a bot to correct coding on existing articles that contain the Template:Handbook of Texas. The handbook changed its URLs. User Magioladitis has changed the template so it works with all new uses. But we need to run a bot to correct how existing templates were coded in the template section "id=".
Therefore, we need a bot that makes these changes to the coding on "id=":
There are possibly hundreds or thousands of Texas articles affected. Please let me know if you need additional explanation. — Maile ( talk) 15:16, 23 December 2012 (UTC)
Is there a Bot that can find dead links and automatically find an archived link for them?-- Astros4477 ( Talk) 04:27, 26 January 2013 (UTC)
Special:Categories contains a number (probably dozens or scores, maybe hundreds) of non-existent categories with one article. It appears (to me at least) that these cats were added to articles either by mistake or in the hope that the cat would one day be created. It's unlikely that some of these will ever be created and the articles should be removed from the cat.
Identify from Special:Categories all non-existent categories (i.e., red-links) and remove the category from the articles therein. Run this bot occasionally, perhaps once a week.
(When linking the categories, use [[:Category rather than [[Category, so that pages are linked rather than categorised.) Regards, Illia Connell ( talk) 04:19, 23 January 2013 (UTC)
The top division of the Korean professional football league's official name was changed from K-League to K League Classic (source: the-afc.com). So, some categories related to K-League must be changed too. I request to move all articles in Category:K-League players to Category:K League Classic players. z4617925 ( talk) 13:21, 30 January 2013 (UTC)
I would like to request the creation of a bot for detection of violations of NFCC Policy 10c. The bot should parse through all pages in the File namespace and check whether each has a non-free copyright tag. If that's the case, then the bot should check whether for each file use in article namespace there is a non-free use rationale (this should also include non-template-based rationales; in that case it should at least be checked whether the rationale mentions the article name). If that is not the case for a specific use, the bot should do the following:
The tagging of the file page will place that file page in a maintenance category for human editors to check. -- Toshio Yamaguchi 16:35, 22 January 2013 (UTC)
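As a sketch, the per-use check reduces to "is the article title mentioned anywhere on the file page". The helper inputs below stand in for real API lookups, and an actual bot would parse rationale templates rather than do a raw substring test:

```python
def uses_lacking_rationale(file_page_text, using_articles):
    """Return the articles that use the file but are never mentioned
    on the file description page (so no rationale can cover them)."""
    return [title for title in using_articles if title not in file_page_text]

def check_file(has_non_free_tag, file_page_text, using_articles):
    if not has_non_free_tag:
        return []  # free files need no rationale
    return uses_lacking_rationale(file_page_text, using_articles)
```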
Note: I also notified the watchers of Wikipedia talk:Non-free content here and the wider community here. -- Toshio Yamaguchi 16:44, 22 January 2013 (UTC)
I haven't made up my mind yet about whether a bot run would be useful, but I have concerns about the wording of the notification templates. They are too soft. "Please add a valid non-free use rationale if possible [...] or discuss the issues at WP:NFCR" points the reader in the wrong direction. In 90% of all cases, at my rough estimation, the correct outcome will not be adding of a FUR, but removal of the image. We don't want to push editors to just mechanically add bad boilerplate FURs to cover up bad usages. We also don't want to spread the myth that you cannot remove a bad non-free image without prior discussion (whereas adding one without prior discussion is okay). My suggestion for the notification would be: "Please consider if the use of the file in these articles can be justified under our policy criteria. If yes, please add an appropriate non-free use rationale explaining how and why it is justified. If not, please remove it from the article. If in doubt, start a discussion at WP:NFCR." Fut.Perf. ☼ 18:43, 22 January 2013 (UTC)
Please let me know if there is anything else that needs to be addressed regarding this request. Are the templates okay now? -- Toshio Yamaguchi 19:09, 26 January 2013 (UTC)
Not a specific request as such, but this is an invitation for bot creators to head over to
Wikipedia talk:WikiProject UK geography now that we have the main data release for the
United Kingdom Census 2011 and we have to figure the best way to incorporate it into Wikipedia.
Wikipedia talk:WikiProject India is also going to have a similar problem, their census website is down at the moment which may be because they're about to launch their main lot of data for the
2011 census of India. The UK already has templates such as {{
English district population}} in an attempt to use templates to pull demographic data from a central source and I seem to remember there was some kind of project going on for this kind of centralised data on the Toolserver, but I can't recall what it was called.
I suspect India would involve starting from scratch - and from experience there's going to be a lot of cleanup needed before any bots go near it, a lot of census districts either don't exist or - the bane of Indian geography - they do exist but under different names. It's not just the simple "updating" of colonial names either, like Bombay -> Mumbai. In some cases there are genuinely several different spellings been used throughout history, and in other cases you've got spellings being used as part of an attempt to push a particular ethnic POV.
FlagSteward (
talk) 15:27, 31 January 2013 (UTC)
Hello. The Romanian alphabet includes the letters Șș (S with comma) and Țț (T with comma). Before Unicode 3 was released and became common, these letters could hardly be typed on web sites, if at all, so people got used to the workaround of using Şş (S with cedilla) and Ţţ (T with cedilla) instead. Nowadays, this workaround is no longer needed: the Romanian Wikipedia has corrected them all, and the German Wikipedia followed and (with some of my contribution) has meanwhile finished. Could someone make a bot on en:wp substitute every Ş by Ș, every ş by ș, every Ţ by Ț and every ţ by ț in lemmas within the Romanian geography categories, and do the same substitutions within the article text? Thank you very much! -- JøMa ( talk) 12:34, 30 January 2013 (UTC)
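The four substitutions are a fixed character mapping, so the text side of the job is tiny (finding the affected pages and moving the titles is the real work):

```python
# Cedilla -> comma-below, per the request above.
CEDILLA_TO_COMMA = str.maketrans({
    "\u015E": "\u0218",  # Ş -> Ș
    "\u015F": "\u0219",  # ş -> ș
    "\u0162": "\u021A",  # Ţ -> Ț
    "\u0163": "\u021B",  # ţ -> ț
})

def fix_romanian_diacritics(text):
    return text.translate(CEDILLA_TO_COMMA)
```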
The latter has replaced the former as a more descriptive file name and on the Commons, but there are many pages which still point to the old file. Most of them are transcluded through the {{ Flagicon}} template which has already been fixed, but it would be useful to have a bot go through and replace all the ones that link directly to the file. Sycamore ( talk) 18:29, 1 February 2013 (UTC)
At Cfd January 24 there was a consensus to delete {{ HarzMountain-geo-stub}} and its associated Category:Harz Mountain geography stubs. I closed the discussion, but while the bots at WP:CFD/W can delete the category, they don't know to orphan a template.
Please can some kind bot-owner orphan Template:HarzMountain-geo-stub (i.e remove all uses of it)?
Thanks! -- BrownHairedGirl (talk) • ( contribs) 20:07, 1 February 2013 (UTC)
Could anyone help in cleaning Category:Infobox cricketer using deprecated parameters? -- Magioladitis ( talk) 01:05, 27 January 2013 (UTC)
"|playername= needs to be renamed to |name=" and "|imagealt= needs to be renamed to |alt=" have been done already by Yobot. -- Cheers, Riley 01:58, 27 January 2013 (UTC)

The |living= and |partialdates= parameters, if present, must be removed, since their presence will cause obsolete parameters like |yearofbirth= to be processed (even if the new parameters like |birth_date= are provided) and generate an error. If there is no |death_date=, then |birth_date= should be given {{ birth date and age}} instead of {{ birth date}}.
I have a very tedious job that I would love to have automated. The current table structure over at Wikipedia:Today's article for improvement/Nominated articles is going to be converted into this structure, using a template. Conveniently, each row in the table is labelled identically with comments, and the template is labelled with identifier tags. The details of what I'd like to convert can be found here, but the gist of it is that I'd like a bot to convert all of the tables into templates on the Nominations page, as well as the Holding Area and Archives. -- Nick Penguin( contribs) 03:29, 5 February 2013 (UTC)
Where is the discussion area about bot technology? I'm interested in learning what bots are and are not capable of.
For example, when articles are split and a summary is left in the original article, both the new article and the old section it was split from receive development from editors after the split, creating a fork situation.
New material is added by some editors to the new article, but new material also gets added to the original section in the old article. Not the same material.
There are tens of thousands of instances of the summary style being used, and a great many of them have resulted in forked content, with new content being added to the summary and not to the main article.
Could a bot be created that could sync up article sections with the corresponding {{ Main}} articles?
That is, identify — in the section — material that is not included in the main article, and copy or move that material to the main article?
Is natural language processing sufficiently sophisticated to handle this?
What tools are available (anywhere in the computing world) that would be useful for this? The Transhumanist 02:33, 8 February 2013 (UTC)
To what extent can bot technology be used for subject consolidation?
For example, is this doable: gather all the mentions of a particular individual from everywhere on Wikipedia and dump it all in a project page for evaluation by human editors?
Or identify and gather everything about the subject "natural language processing" regardless of what articles it appears in?
Sometimes, details of a subject are added to a less relevant article. For example, details about an organization in the biographical article on its founder. Those details may be more relevant to the organization article.
How can bots help find and gather material on Wikipedia about a subject that is somewhere other than the article on that subject? The Transhumanist 02:33, 8 February 2013 (UTC)
This would probably be considered a form of multi-document summarization.
Sometimes editors skip a subject and directly edit the article on a subtopic. There are many instances of subjects that are missing a subheading for a subtopic that has an article on Wikipedia.
Is it theoretically possible to write a bot that could analyze an article, and Wikipedia with respect to that article's subject, to identify missing subtopics that have their own articles?
It should be an easy matter to check for matching article titles once the subtopics have been determined.
To build a section, all you'd have to do is copy the lead (or lead paragraph) of the subtopic article.
But how would the bot determine the names of the subtopics that are missing from a subject's article? The Transhumanist 07:44, 8 February 2013 (UTC)
Easier than the above-described bot would be one that looked for sections whose entire content is a {{ Main}} template.
That is, the section has a "Main article" hatnote, but is otherwise empty. It's missing a summary-style summary.
The bot would simply insert into the empty section a copy of the lead paragraph from the article specified in the Main template.
What problems am I overlooking here? The Transhumanist 07:52, 8 February 2013 (UTC)
P.S.: How would a bot go about finding such sections? -TT
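Finding such sections is the easy half and could be done with pattern matching over the wikitext. A sketch (the regex is illustrative and assumes the {{Main}} hatnote is the only thing in the section):

```python
import re

# A heading whose body is nothing but a {{Main|...}} hatnote.
EMPTY_MAIN_SECTION = re.compile(
    r"^(==+)\s*([^=\n]+?)\s*\1\s*\n"              # the section heading
    r"\{\{\s*[Mm]ain\s*\|\s*([^}|]+?)\s*\}\}\s*"  # its only content
    r"(?=^==|\Z)",                                # next heading or end of page
    re.MULTILINE,
)

def empty_main_sections(wikitext):
    """Return (section title, main-article target) pairs for sections
    that have a Main hatnote but no summary text."""
    return [(m.group(2), m.group(3)) for m in EMPTY_MAIN_SECTION.finditer(wikitext)]
```

Writing the summary in (the lead paragraph of the target) would then need a fetch of each target article.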
Ambassadors from one country of the Commonwealth of Nations to another are called High Commissioners (though their responsibilities are that of an ambassador). Accordingly, there's no Category:Ambassadors of the United Kingdom to Canada but rather Category:High Commissioners of the United Kingdom to Canada. This category is included in the parent category Category:High Commissioners of the United Kingdom which is itself included in Category:Ambassadors of the United Kingdom. In a recent CfD about the ambassador categories of the United Kingdom, it was decided (among other things) that categories of the form Category:High Commissioners of the United Kingdom to Canada should be subcategories of both Category:High Commissioners of the United Kingdom and Category:Ambassadors of the United Kingdom. It's natural to extend this solution to every country of the Commonwealth and nobody objected when I posted this proposal at Wikipedia talk:WikiProject International relations a couple of weeks ago.
So here's the bot task.
Thanks, Pichpich ( talk) 16:28, 8 February 2013 (UTC)
If possible, I would like to request a bot run to correct/change the following edits. I have written more than 160 articles about Michelin-starred restaurants and it would eat up a shocking amount of time to figure out where the outdated sources are placed and correct them. The last edit is a link fix, because I am sick of all the people saying that it is POV to name a Michelin-starred restaurant a "quality restaurant", although they are judged on the quality of their food. The Banner talk 15:02, 12 January 2013 (UTC) If I am at the wrong place, sorry. Please move it to the right place. This is out of my comfort zone.
Is this a difficult request or the wrong place? The Banner talk 12:47, 5 February 2013 (UTC)
I see each reference starts with {{ nl}} before the {{ cite news}} template and ends with "Historical Overview Dutch Michelin stars xxxx" after the {{ cite news}} template. Would it be better to use the parameters |language=Dutch and |trans_title=Historical Overview Dutch Michelin stars xxxx within the {{ cite news}} template instead? Thanks! GoingBatty ( talk) 17:53, 5 February 2013 (UTC)
Per consensus at Template_talk:Infobox_company#Replacing_full_stop_with_comma_at_the_num_employees_field and the collapsed discussion below, there is a request to replace a full stop (.) used as a thousands separator with a comma (,) in the num_employees fields of company infoboxes. Rationale and a more detailed description are provided below, along with information on which forums were notified about the discussion. Beagel ( talk) 19:52, 1 February 2013 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

The following discussion is migrated from the company's infobox talk page. Beagel ( talk) 18:10, 1 February 2013 (UTC)

A lot of current num_employees fields in the companies infoboxes use a full stop (.) to separate thousands (e.g. 12.200, 5.200) instead of using a comma (,). This is confusing, as a full stop (.) usually means the decimal point, and this also violates WP:MOSNUM. E.g. the infobox at Minerva S.A. states that "num_employees = 7.000". It would be replaced by "num_employees = 7,000". Most of these (but not only) were introduced by blocked User:Edson Rosa and his sockpuppets. As the number is large (mainly concerning Brazilian companies) and they are hard to detect manually, I propose to use some bot for this task. If the proposal achieves consensus, it applies only to […] I also notified WP:COMPANIES and WP:MOSNUM about this proposal. Beagel ( talk) 18:12, 28 January 2013 (UTC)

Comments

Support

Oppose

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Why this discussion was closed just few minutes after filing as it was specifically about requesting a bot for a certain task? Where it should be discussed if not here? Beagel ( talk) 18:37, 1 February 2013 (UTC)
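If consensus holds, the substitution itself is narrow enough to automate safely. A sketch restricted to unambiguous thousands-grouped values in the one field (pattern illustrative):

```python
import re

# |num_employees = 7.000  ->  |num_employees = 7,000
FIELD_RE = re.compile(r"(\|\s*num_employees\s*=\s*)(\d{1,3}(?:\.\d{3})+)(?!\d)")

def fix_thousands_separator(wikitext):
    """Rewrite dotted thousands grouping to commas, only in the
    num_employees infobox field and only for d{1,3}(.ddd)+ shapes,
    so genuine decimals elsewhere are left alone."""
    return FIELD_RE.sub(lambda m: m.group(1) + m.group(2).replace(".", ","), wikitext)
```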
Is it possible to get a bot to place an unreferenced article tag on all articles without a single instance of "<ref>"? This would speed up the categorisation and reduce human editors' workloads. Thanks ツ Jenova 20 ( email) 10:03, 8 February 2013 (UTC)
Doing this with a bot would incorrectly tag articles that use external links instead of inline references. Of course, these external links should ideally be replaced by inline citations but that's a separate issue. Pichpich ( talk) 16:10, 8 February 2013 (UTC)
References may be given as <ref>...</ref>, {{ sfn}} or any other methods now known or to be invented. -- Redrose64 ( talk) 19:57, 12 February 2013 (UTC)

Template bombing is a bit of a problem here: it disfigures articles and can hide the more important templates amidst a clutter of ones that are more about maintenance. There are two things that would reduce this problem:
Please can we have a bot to find and cull duplicate templates. Where their dates differ keep the earlier template. Ϣere SpielChequers 16:31, 8 February 2013 (UTC)
Please can we have a bot to find articles with more than three templates and ensure they are in the {{ multiple issues}} template. Currently we have some articles with lots of templates, and worse, ones where the most important templates are in a {{ multiple issues}} template and less serious ones are given far greater prominence by being a large separate template. Ϣere SpielChequers 16:31, 8 February 2013 (UTC)
I have added the above tasks to my BRFA here and the code is now written. Just have to wait for another trial for testing and bug fixing. If anyone can find any specific articles I could test the bot on that would be great as it is quite hard to find articles with duplicate tags on them. Cheers ·Add§hore· Talk To Me! 20:37, 8 February 2013 (UTC)
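For the duplicate-tag half, a first approximation is to keep only the first copy of each maintenance template. A real implementation would compare |date= values and keep the earlier one, as requested; the template list here is illustrative:

```python
import re

MAINT_TAG = re.compile(r"\{\{\s*(Citation needed|Refimprove|Unreferenced|Orphan)\b[^}]*\}\}")

def cull_duplicate_tags(wikitext):
    """Drop second and later copies of the same maintenance template."""
    seen = set()
    def repl(match):
        name = match.group(1)
        if name in seen:
            return ""
        seen.add(name)
        return match.group(0)
    return MAINT_TAG.sub(repl, wikitext)
```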
I have nominated 158 categories for merger at Wikipedia:Categories for discussion/Log/2013 February 13#States_and_territories_disestablished_before_1000CE.
This is too many to tag manually, so I have made a list at User:BrownHairedGirl/List of states and territories disestablished before 1000AD categories. Please can a bot tag all 158 of these categories with {{subst:cfm|2=States_and_territories_disestablished_before_1000CE}}? The tag should be at the top of the page.
Thanks. -- BrownHairedGirl (talk) • ( contribs) 18:23, 13 February 2013 (UTC)
<!--BEGIN CFD TEMPLATE--> <!-- Please do not remove or change this [[Template:Cfm]] message until the survey and discussion at [[WP:Cfd]] is closed --> {{Cfm full|day=13|month=February|year=2013|1=States_and_territories_disestablished_before_1000CE|target=}} <!-- End of Cfm message, feel free to edit beyond this point. --> <!--END CFD TEMPLATE-->
The first column of both tables at List of round barns contains a numbered list. This numbered list means nothing and also forces a change to every subsequent row if someone wants to add a row in. Is there a quick and easy way to completely remove the number column/entries? Ryan Vesey 19:35, 13 February 2013 (UTC)
If someone wants to work on bot creation, there is a lot of creative work waiting on the Sanskrit Wikipedia (sa.wikipedia.org). Those who want to volunteer can contact me via e-mail hmt[dot]seeit[at]gmail.com . Thanks a lot. - Hemant wikikosh ( talk) 13:41, 14 February 2013 (UTC)
Hi. I would like to make a request to remove interwiki links from articles that already have interwiki links provided by Wikidata. Thanks, Lord Sjones23 ( talk - contributions) 16:43, 15 February 2013 (UTC)
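As a sketch, the removal itself is a line-level pattern. A real bot must first confirm that Wikidata actually carries each link before deleting it, which this toy version does not do:

```python
import re

# Old-style interwiki lines such as [[de:Beispiel]] at the end of a page.
INTERWIKI_LINE = re.compile(r"^\[\[[a-z]{2,3}:[^\]\n]+\]\][ \t]*\n?", re.MULTILINE)

def strip_interwiki_links(wikitext):
    """Remove whole-line interwiki links (category and article links
    are left alone because of the short lowercase prefix requirement)."""
    return INTERWIKI_LINE.sub("", wikitext)
```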
Category:Orissa articles missing geocoordinate data was recently renamed to Category:Odisha articles missing geocoordinate data through WP:CFDS. The trouble is, this category is populated by {{ coord missing}} - and with 287 articles that need to be changed, that's a small bit of drudgery to be gone though. Would it be possible to get a bot to search the contents of Category:Orissa articles missing geocoordinate data for "{{coord missing|Orissa}}" and replace all instances found with "{{coord missing|Odisha}}"? Thanks! - The Bushranger One ping only 07:44, 16 February 2013 (UTC)
I request a bot to automatically change all instances of "and/or" in the main namespace with "or", per WP:ANDOR, excluding any articles under Category:English grammar or one of its subcategories. ❤ Yutsi Talk/ Contributions ( 偉特 ) 04:20, 17 February 2013 (UTC)
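The textual substitution itself is trivial; the hard part of this request is the exclusions. A deliberately naive sketch of the replacement step only:

```python
import re

def replace_andor(text: str) -> str:
    """Naive 'and/or' -> 'or' pass; a live run would also have to skip
    direct quotations, work titles, and articles under Category:English
    grammar, per the request."""
    return re.sub(r"\band/or\b", "or", text)
```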
At the moment, Wikivoyage is opt-in in {{ sister project links}}, while all other WMF projects (except Wikidata and Wikispecies, which are special) are opt-out. This doesn't make sense and should be changed. To do so, however, we would need a bot to go through all transclusions of {{ sister project links}} and add a "voy=no" parameter, except on articles that are about a town, state, country, etc. It would be fairly simple to use something like {{ infobox settlement}} as a guide, but it may not be very accurate. (I'm not sure if it is necessary to check for a corresponding page on Wikivoyage, as some of the search results on other pages may be useful as well.) — This, that and the other (talk) 10:22, 7 February 2013 (UTC)
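A sketch of the per-article logic described above, using {{infobox settlement}} as the (admittedly imperfect) geography heuristic; names are illustrative and a real bot would likely use a template-parsing library rather than regexes:

```python
import re

SPL_RE = re.compile(r"(\{\{\s*[Ss]ister project links)([^{}]*)(\}\})")

def add_voy_no(text: str) -> str:
    """Append |voy=no unless the article carries {{Infobox settlement}}."""
    if re.search(r"\{\{\s*[Ii]nfobox settlement", text):
        return text  # looks geographic: leave the Wikivoyage link enabled
    def repl(m):
        if "voy=" in m.group(2):
            return m.group(0)  # parameter already set either way; don't touch
        return m.group(1) + m.group(2).rstrip() + "|voy=no" + m.group(3)
    return SPL_RE.sub(repl, text)
```

Skipping transclusions that already set `voy=` keeps the run from overriding deliberate per-article choices.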
Is it possible to scan our language articles to check that they link to the correct ISO codes?
There are 7,500 blue links on Wikipedia:WikiProject Languages/Primary language names in Ethnologue 16 by ISO code. If a bot could verify which of these are obviously correct, I could check any exceptions manually.
The parameter is whether the ISO code at the target article matches what we have on the ISO list. For example, the first link on the list page is [[Ghotuo language|aaa]]. The article Ghotuo language has a language infobox with the parameter "iso3" set equal to aaa, so that link is good. That should be easy to check by bot, assuming it can follow redirects.
The potentially matching parameters in {{ Infobox language}} are iso3, lc1, lc2, lc3, .... (lc-n is used where there is more than one ISO code.)
I'm hoping for a list of any language names which do not link to the matching ISO code. I suspect there are a fair number of circular links which need to be fixed. It would be nice if both pieces of data could be returned. So, if the example link were bad, we'd get back "Ghotuo language : aaa" or something similar.
Is that possible?
Thanks, — kwami ( talk) 06:42, 3 February 2013 (UTC)
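The check kwami describes reduces to: extract the `iso3` and `lc1`…`lcN` values from the target article's {{Infobox language}} and test whether the expected code is among them. A sketch of that comparison (redirect-following and page fetching are left to the bot framework):

```python
import re

def iso_codes(article_text: str):
    """Collect iso3 and lc1..lcN values from an {{Infobox language}} body."""
    return set(re.findall(r"\|\s*(?:iso3|lc\d+)\s*=\s*([a-z]{3})\b",
                          article_text))

def link_ok(expected: str, article_text: str) -> bool:
    """True when the linked article declares the expected ISO 639-3 code."""
    return expected in iso_codes(article_text)
```

Articles failing the check would be reported as "Ghotuo language : aaa"-style pairs, as requested, so both pieces of data come back for manual review.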
I am trying to reactivate WikiProject Dutch municipalities. One of the steps would be to assess all relevant articles, which are the articles on the municipalities and the articles about their subdivisions. All these articles are already tagged with the {{ WikiProject Netherlands}} banner as far as I can tell. These would need to be updated with |muni=yes |muni-importance=Low/Mid.
All municipalities would have to be automatically tagged as Mid importance and their subdivisions as Low importance. For each of the 12 provinces there are lists of cities, towns and villages, and categories containing all the municipalities. Articles occurring in both would need a Mid-importance tag; articles occurring only in the former, a Low-importance tag. Both groups would need |muni=yes as well.
I am not fully familiar with the Wikipedia bot process, but I assume this would be possible to do automatically. Could any of you advise me how to proceed with this? CRwikiCA ( talk) 20:22, 15 February 2013 (UTC)
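Once a page is classified as municipality (Mid) or subdivision (Low), the edit itself is a banner-parameter update. A sketch of that step, with illustrative names (the list/category intersection that decides the importance level is a separate lookup):

```python
import re

BANNER_RE = re.compile(r"(\{\{\s*WikiProject Netherlands)([^{}]*)(\}\})")

def tag_municipality(talk_text: str, importance: str) -> str:
    """Add |muni=yes|muni-importance=... to an existing banner (idempotent)."""
    def repl(m):
        if "muni=" in m.group(2):
            return m.group(0)  # already assessed; leave it alone
        return (m.group(1) + m.group(2).rstrip()
                + "|muni=yes|muni-importance=" + importance + m.group(3))
    return BANNER_RE.sub(repl, talk_text)
```

Pages whose banner already carries `muni=` are skipped, so a second pass (e.g. re-running after fixing the province lists) does no harm.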