This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | ← | Archive 4 | Archive 5 | Archive 6 | Archive 7 | Archive 8 | → | Archive 10 |
The majority of the US location articles that were added by Rambot are written in the present tense. "As of 2000, the population is... the average income is... the majority of families have..." etc. These should be changed to past tense.
It's pretty straightforward to do - within the demographics section, replace "is" with "was" (10 instances); replace "are" with "were" (12 instances); replace "have" with "had" (4 instances). I've been doing this manually when I come across them (see, for example, Jasper, New York), but that'll take a while across 30,000 articles and it seems to me like a task ideally suited to a bot. Any offers? -- OpenToppedBus - Talk to the driver 10:07, 3 February 2006 (UTC)
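For illustration, a minimal sketch of the substitution in Python (the function name and word-boundary approach are mine, not from the request; a real bot would restrict the edit to the Demographics section only, since these replacements are unsafe elsewhere in an article):

```python
import re

def demographics_to_past_tense(section_text):
    # Convert present-tense census wording to past tense.
    # Word boundaries keep words like "this" or "share" untouched.
    for present, past in (("is", "was"), ("are", "were"), ("have", "had")):
        section_text = re.sub(r"\b%s\b" % present, past, section_text)
    return section_text
```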
Was wondering if a bot could convert these star ratings (and their derivatives) in albums to plain text. Here is the discussion. Wikipedia_talk:WikiProject_Albums#Stars_to_text. Gflores Talk 06:14, 5 February 2006 (UTC)
I've been removing referral IDs from outgoing links whenever I've been able to find them. I frankly don't like the idea that someone could make money from Wikipedia by sneaking these links into places where they could even be considered legit.
What I'd like this bot to do is to find links that contain a referral ID, strip it off and post a normal one that works just as well.
example:
becomes:
http://www.amazon.com/gp/product/B000000W5L/sr=1-1/qid=1138522986/
O bli ( Talk) 23:11, 5 February 2006 (UTC)
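A sketch of the stripping step in Python, under stated assumptions: the example above is path-based, but this sketch only handles referral IDs carried as query-string parameters, and the parameter list is illustrative, not exhaustive:

```python
import urllib.parse

# Query parameters commonly used for affiliate/referral tracking
# (an illustrative list -- a real bot would need a vetted one).
REFERRAL_PARAMS = {"tag", "ref", "associate-id", "camp", "creative", "linkCode"}

def strip_referral_ids(url):
    # Remove known referral parameters from the query string,
    # leaving the rest of the URL untouched.
    parts = urllib.parse.urlsplit(url)
    kept = [(k, v) for k, v in urllib.parse.parse_qsl(parts.query)
            if k not in REFERRAL_PARAMS]
    query = urllib.parse.urlencode(kept)
    return urllib.parse.urlunsplit(
        (parts.scheme, parts.netloc, parts.path, query, parts.fragment))
```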
Since fy: has been getting a higher load of real vandalism lately (as opposed to the occasional graffiti-editor), I'm looking for a way to revert the contributions of a specified anonymous user in a somewhat automated way. As the bot bit does have a function in this process, as done by administrators, I expected to find an actual bot to help them out, but I can't find it. If I'm blind, could someone point me in the right direction? If not, would someone be willing and able to write such a bot? The task seems pretty straightforward; it's just an awful lot of clicking when done by hand. 217.123.4.108 21:03, 7 February 2006 (UTC)
The backlog on Category:Cleanup by month is getting out of control, with 1.3% of Wikipedia currently tagged for cleanup. In order to speed the cleanup process, I propose a janitor bot to move "cleanup" pages that belong in other maintenance departments elsewhere.
The bot would have the following proposed behaviors:
The first two tasks appear to be within the capabilities of current bots, and should be easy to accomplish. Going through the capabilities of current Wikipedia:Bots does not show the capability to determine percentage of wiki links, but this is probably not a difficult task, as word counting and a repeated regexp search for /[[*]]/ should be all that's required.
These are the tasks that seem obviously automatable. Much of WP:CU requires human interaction; but at least we can figure out some of the human interaction that's necessary. Alba 00:39, 8 February 2006 (UTC)
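As a side note, the literal /[[*]]/ above is not a valid regular expression as written; a sketch of the wiki-link-percentage check in Python, with the brackets escaped (the function name and the word-based density measure are my own choices, and the threshold for calling a page a "list of links" would still be a judgment call):

```python
import re

def wikilink_density(wikitext):
    # Fraction of words that fall inside [[...]] wiki links.
    words = len(wikitext.split())
    if words == 0:
        return 0.0
    linked = sum(len(m.split())
                 for m in re.findall(r"\[\[(.*?)\]\]", wikitext))
    return linked / words
```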
I created {{ softredirect}} some time ago, and it seems to have been well received. It is supposed to be used instead of interwiki redirects, which do not work. However, it's hard to find the interwiki redirects without a bot. I'd like to ask for a bot to convert all interwiki redirects into uses of the {{ softredirect}} template. -- cesarb 01:25, 11 February 2006 (UTC)
Category:Stock exchanges appears to be populated by all the stock exchanges, most of which fall into the geographical categories of ...in Europe, ...in North America, ...in Asia, etc. According to WP:CAT, articles shouldn't be a broad category that is covered by a sharper category. What bots, if any, do mass recategorization like this? -- Christopherlin 18:03, 11 February 2006 (UTC)
Here's an easy one: phd, Phd, PhD and Ph.D should be changed into Ph.D. Kjaergaard 05:36, 17 February 2006 (UTC)
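A hedged sketch of what the normalization might look like (the pattern is mine; note it deliberately leaves "Ph.D." alone so the dot isn't doubled, and a real bot would still want human review, since "PHD" and other contexts are ambiguous):

```python
import re

# Match the variants phd, Phd, PhD, Ph.D as standalone tokens, so that
# words like "uphold" are untouched; the lookahead skips "Ph.D." which
# is already correct.
PHD_RE = re.compile(r"\b[Pp]h\.?[Dd]\b(?!\.)")

def normalize_phd(text):
    return PHD_RE.sub("Ph.D.", text)
```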
This was originally posted on WP:VPA, but User:Angela referred me here.
Does anyone have a robot that they could run which could change all occurrences of "id=toc" into "class=toccolours"? They both look the same to most folk, but id=toc hides the division from folk who have preferences for "contents turned off". And 99% of these are not tables of contents but are related items link boxes. Editing the 1% by hand would be easier than the 99%.
For an example, see Elisa Oyj and Template:Finnishmobileoperators which has had this change done. -- SGBailey 21:17, 18 February 2006 (UTC)
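The replacement itself is simple enough to sketch; this version (my own, illustrative) also catches the quoted forms id="toc" and id='toc', though matches would still be worth eyeballing before a mass run:

```python
import re

# Handle id=toc with or without quotes, in either quote style.
TOC_RE = re.compile(r'id\s*=\s*(["\']?)toc\1')

def fix_toc_id(wikitext):
    return TOC_RE.sub('class="toccolours"', wikitext)
```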
It might be nice if American Cities/Towns were displayed as nicely as British towns are. E.g. on the right side are all the vital/geo stats, and the article explains the town in question. American towns currently get a red dot on a map of the state they are in, but the non-American readers probably have no idea where the dot is in relation to the USA as a whole. In general, the system for American towns/cities/etc seems a bit American-centric. Maybe I'm off base. I wouldn't mind a discussion, as I feel I can learn quite a bit more about British towns in general than their American counterparts from the general info displayed.
The category Category:Eurovision Song Contest needs categorising better, but it's kind of a bit hard to do alone manually. I'm wondering if a bot could do it better. Here's what essentially needs to be done:
Thanks if anyone can help. Esteffect 00:18, 23 February 2006 (UTC)
I'm wondering if I can find any of these bots somewhere. Right now I'm using pyWikipedia with XML Dump file.
-- Manop - TH 20:56, 24 February 2006 (UTC)
There should be a bot for automating the processes of WP:COTW, WP:AID, and other collaborations. It should count the number of votes, remove failed nominations, and do everything listed under Wikipedia:Collaboration of the week/Maintenance if the clock reaches 18:00 Sunday, so people can have more time to work on expanding the collaborations. -- King of Hearts | (talk) 00:05, 25 February 2006 (UTC)
In discussion at Template talk:Infobox Company there is an emerging consensus that the slogan field should be removed from the infobox, but we want to hang on to the data in the field. There are two things which you might be able to help us with: firstly, could a bot create a list of the pages which contain infoboxes with slogans, and secondly, would it be possible for a bot to remove the data and insert it perhaps as a section just before 'see also' with a heading such as 'corporate branding'? (The first would be very useful, the second is still subject to the discussion outcome - just trying to get a feel for what can be done) Many thanks Ian3055 22:09, 27 February 2006 (UTC)
We need a bot to replace Userbox and Userboxes with Wikipedia:Userboxes. Thank you. -- F a ng Aili 22:06, 3 March 2006 (UTC)
Is there a bot for adding an info box to all articles in a category? I checked the Wikipedia:bots page, but I didn't see a bot that would do what I'm thinking.
Currently the dinosaur pages on WP are in bad shape. There are several hundred categorized dinosaur stubs that could use an infobox, but manually adding them might take some time. Can't a bot do all that work instead?-- Firsfron 02:47, 6 March 2006 (UTC)
This suggestion is about a tool and not a bot, but I didn't know where to put it. I'm suggesting a tool that searches through a page's history and lists all the images that have been used, even if they were removed. This way, we could retrieve images that were replaced by better ones in the article, but are still good for use. The only problem is that Wikipedia doesn't categorise images; that's why I think this tool would be useful. CG 21:22, 10 March 2006 (UTC)
Formerly, the names "Crimea" and "Ukraine" were used with the definite article. Today only Crimea and Ukraine (without article) are considered to be correct forms. But the Crimea and the Ukraine can still be found in many wiki-articles. IMHO it would be a good task for a bot - to remove "the". Don Alessandro 09:07, 12 March 2006 (UTC)
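A sketch of the removal in Python (pattern and function name are mine; the trailing word boundary means phrases like "the Crimean War" are left alone, but a supervised run would still be safer than a blind one):

```python
import re

# Drop the definite article before "Ukraine"/"Crimea". The \b after
# the group keeps "the Crimean War" and similar phrases untouched.
THE_RE = re.compile(r"\b[Tt]he\s+(Ukraine|Crimea)\b")

def drop_article(text):
    return THE_RE.sub(r"\1", text)
```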
Is there a bot that can perform search functions to link Wikipedia articles that are relevant to a Wikibook? Also, more generally, Wikicities, other areas of WikiBooks, Wiktionary... (And, I know, it gets less likely the heuristic would be able to discern what was relevant, but I'd rather delete bad links than put in all of my own.) Google? Yahoo? Prometheuspan 18:18, 16 March 2006 (UTC)
I will look that up. As a side note, I had envisioned linking to as many other wikibooks as were relevant, and to as many wikipedia articles as possible. Prometheuspan 22:26, 28 February 2006 (UTC)
Unfortunately, I haven't heard of anything like this. --Derbeth talk 23:40, 16 March 2006 (UTC)
What about a bot that links references to religious texts to the appropriate section of that wikibook? This may need to be written for each book separately, but of particular interest to me would be the Jewish/Christian Bible and Islamic Qur'an. If this would interest anyone, please contact me on my talk page!
Andrewjuren 20:50, 24 March 2006 (UTC)
huh. You'd think they would have like an RSF or some such thing set up to link a new wiki to its parent networks like that. I have asked at the bot request wikipedia zone. Is there somebody else or someplace else to go look? Prometheuspan 00:33, 17 March 2006 (UTC)
There has been discussion at Wikipedia talk:Categorization about repopulating some categories that had previously been depopulated after being divided into subcategories. One example is Category:American actors. There is a good deal of support for doing this. Before there was {{ CategoryTOC}} it was necessary to break large categories into smaller subcategories, and there is a value in having these smaller categories. However, categories also serve as the master index of subjects and it is often frustrating to have to look in several subcategories to browse through the articles in a subject. A good example of this is Category:Film directors. The proposal is to keep the subcategories, but also have articles duplicated in parent (or grand-parent) categories up to the level of topic articles.
I am wondering if a bot could be created to run frequently (once a day?) which would go through a list of categories that should be duplicated in other categories and check to see if the duplications exist. If they do not, they would be added. I suspect that there will need to be a page created to discuss this duplication process ( Wikipedia:Duplicated Categories?) and editing the list of duplications would probably have to be limited to admins. By having this bot, a person could add the lowest level category that applies and the category would also end up in the higher level categories. The bot would have to look at each article in the category and see if the higher level categorization exists, if it does not, the categorization would be added. For categories of people, the piping should be copied so that the article is alphabetized correctly.
Another bot might scan through the higher level lists and collate a list of articles that have not been put in any of the lower level subcategories.
I am just wondering if this is possible. There would have to be quite a bit of discussion about whether this should happen and how it will happen. First I want to know what is possible. Thanks. -- Samuel Wantman 10:21, 19 March 2006 (UTC)
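On the "what is possible" question, the core check is straightforward; a minimal sketch (names and the shape of the duplication map are my own assumptions, not an agreed design), which a bot would run per article, with piping handled separately as noted above:

```python
def categories_to_add(article_cats, duplication_map):
    # duplication_map maps a subcategory to the parent categories its
    # members should also appear in, e.g.
    # {"American film actors": ["American actors"]}.
    # Returns the parents missing from the article's current categories.
    needed = []
    for cat in article_cats:
        for parent in duplication_map.get(cat, []):
            if parent not in article_cats and parent not in needed:
                needed.append(parent)
    return needed
```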
It seems to me that a better solution would be to get the MediaWiki software to display all subcat articles of a particular category. If this feature is introduced in the future, carrying out the category population with a bot would have been a waste.-- Commander Keane 12:26, 20 March 2006 (UTC)
Hi there! Those on the Spoken Wikipedia Project would like to explore using a bot to help with our work. Here are a couple of things that have come up in discussions with other project members:
Right now, we have a manually-updated RSS feed that lists new articles that have been recorded. That way, project members and casual listeners can find our new content easily. It would be great if we had a way to automate this, to save SCEhardt the work. Let me know if you're interested in that project.
Currently, we use several tags for our project:
1. We have a tag that people use when they request an article to have read aloud and recorded.
2. We are discussing two tags that project members can add to the article's talk page:
Once an article is recorded and uploaded:
3. We have a couple of tags that go on the article's page itself:
4. In addition to that, we add a tag to the article's talk page
5. Additionally, we have a tag that goes on the Wikipedia:Featured articles page that alerts people that the article is available in an audio format, too.
6. Finally,
So as you can see, we use between 4 and 6 tags for each recording. They all serve a good purpose: they promote the project, help organize our work, and make sure that people can find our recordings.
However, it's a lot of work to do this. Not all of these tags can be automated, but it seems to me that at least a couple could be. For instance, #4 might be. And it would be very useful if we could automate #6.
Again, if this idea is something you'd like to pursue, I can re-explain all of this and/or provide more details. Ckamaeleon ((T)) 02:41, 20 March 2006 (UTC)
Nothing? D'oh! Ckamaeleon ((T)) 21:35, 21 May 2006 (UTC)
Several of AllyUnion's bots appear to have gone offline several days ago. It's only when the automated tasks you're used to seeing done stop getting done that you realize how much you depend on a bot. And that is currently the case. From AllyUnion's user page, it appears that he is mostly on wikibreak. I tried emailing him, but his email does not work. So I've left a message on his talk page. But if he's on break, who knows when he will see it.
So the next question becomes, how long do we wait until the bots are declared out of service, and how then can we get some other bots to pick up the duties. Specific bots that appear to be down include:
NekoDaemon being out of service is what brought this all to my attention, as CFD is one of my normal home playgrounds. But AFD bot appears to have an even more critical role. - TexasAndroid 15:00, 21 March 2006 (UTC)
I have been trying to clean up the cocktails articles, I have tagged about 90 articles for "move to wikibooks". Anyone have a bot that could transwiki them? They all have cocktail recipes in them, the majority are nothing but recipe. They'd need to end up in the wikibook Bartending. http://en.wikibooks.org/wiki/Category:Bartending_pages_needing_work Once transwikied, I could then clean up the wikipedia articles. -- Xyzzyplugh 09:36, 26 March 2006 (UTC)
A bot to make the following fixes to templates could do much, much good to wikipedia (it'd avoid manual fixing of these things, at the very least lol):
* replace id="toc" with class="toccolours"
* fix templates that end with |-|}, which makes no sense, or include |-|-, which is equally nonsensical
* add a margin:0 auto; style declaration
* add a clear:both; style declaration
Circeus 20:07, 31 March 2006 (UTC)
See Talk:Voivodes of the Polish-Lithuanian Commonwealth#Bot help for details. Thanks!-- Piotr Konieczny aka Prokonsul Piotrus Talk 03:27, 1 April 2006 (UTC)
We have lots of wars and we name them XX Years' War... which is close to XX Years War. I think the apostrophe is the more common way to do it... and the proper one... but both are used in some settings. Should this be bot-ted?
gren グレン 02:35, 2 April 2006 (UTC)
I can envision the development of a bot that searches for phrases such as "is a great", "is a fantastic", "is a terrific", "is an awful" ..etc. that could indicate strong POV within the article text. If the phrase appears within quoted text, i.e. as dialog, then it would be excluded.-- Hooperbloob 21:04, 2 April 2006 (UTC)
Someone has put the following paragraph in a bunch of articles on Ohio townships:
This is inaccurate. Some municipalities in Western Reserve townships, such as Cortland, Ohio, are independent from surrounding townships. Others, such as Newton Falls, Ohio, are part of the surrounding townships. This is no different than anywhere else in Ohio.
I'd like to see a bot that would delete the paragraph from all pages on which it is found. -- Mwalcoff 02:24, 4 April 2006 (UTC)
(There was User:Cobo of course, but that doesn't seem to have ever taken off.)
I find a lot of copyvios that have been dormant for months... and it seems like there are probably tons out there, if I can just check a few random articles and find one pretty fast. It seems like a bot with an organized approach would uncover thousands, and with an easy methodology... just select a few random 5-10 word phrases from the article, no punctuation, and search on Google, Altavista, etc for the exact phrase. The bot would make a list of any positive results. Of course it would have to ignore wikipedia mirrors. The odds of it listing a copyvio of something that is actually PD/GPL are low, in my experience, people are more interested in copying and pasting press releases, corporate bios, etc. than Project Gutenberg kind of stuff. But even still... that's where the human factor comes in.
The bot would just create a simple list of possible copyvios (with URLs), so it would be 100% non-invasive... humans (me, for example) would go through the list and handle as appropriate. The list could be stored in the bots userspace or wherever... I imagine it wouldn't be hard to drum up some people to go through it.
There are over a million articles now so it would be a lot of work and time... but afterwards it could perhaps monitor new articles (though that might be more difficult to implement). Also since it's not live, I'm not even sure it would need to be flagged as a bot... all it would do is upload a list eventually, or in installments perhaps.
Anyway, I'm not a programmer... so I have no idea how hard this would be to implement. But given that it's not live, it could presumably be written in any language, up to Visual Basic. I've been thinking about this for a while though, and I think it would make a very positive impact on Wikipedia, and our goal of creating a truly free encyclopedia. Thoughts? -- W.marsh 22:37, 5 April 2006 (UTC)
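To give a feel for how little code the first step needs, here is a sketch of just the phrase-sampling part described above (function name and defaults are mine; issuing the actual search queries and filtering out Wikipedia mirrors are deliberately left out, since those depend on whichever search engine is used):

```python
import random
import re

def sample_phrases(article_text, n_phrases=3, phrase_len=8):
    # Pick a few random runs of consecutive words, punctuation stripped,
    # suitable for pasting into a search engine as exact-phrase queries.
    words = re.sub(r"[^\w\s]", " ", article_text).split()
    if len(words) < phrase_len:
        return []
    phrases = []
    for _ in range(n_phrases):
        start = random.randrange(len(words) - phrase_len + 1)
        phrases.append(" ".join(words[start:start + phrase_len]))
    return phrases
```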
Hello,
I am trying to find out if there is a bot created already that establishes a dictionary of terms found in a wiki site. Currently, there is no listing of definitions, and the terms end up being fairly convoluted at times.
I am able to generate a list of all the terms, and also to create definitions (manually), but I need to go back through the wiki site and create links from those terms to the dictionary. Further, some of the terms are too common to automatically replace using a bot, so they need to be removed from the "link creation" process, yet still remain in the dictionary. Is there something like this I can use as a bot base? Or is it simple to write?
Please note that this is for a mediawiki site. Is there a way that mediawiki can be set up to do this? (I have only glanced at the software docs, as I am not the admin).
Admins, you may email me with any responses. Thanks in advance!
Delfeld 04:37, 6 April 2006 (UTC)
I've noticed by browsing some of the Wikipedia Chinese articles that they will often link to an English page which has no corresponding link back to the Chinese version, and even one or two Chinese pages with no link to the relevant English article. I came across this actually several times in a short period of time, and would guess it to be not all that uncommon.
It would be helpful if someone could create a bot to scan pages and follow the links to different language versions, and make sure that all of the different translations are linked up. (i.e. that if a page exists in 15 different languages on any given topic, that each of those 15 versions has links to all 14 others).
Aside from just making it easier to find content in multiple languages, this may also encourage users to contribute in more than one language if they know the article exists in a second language they are familiar with.
Any thoughts?
-- Hughitt1 19:35, 6 April 2006 (UTC)
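The symmetry check itself reduces to a small graph completion; a sketch under stated assumptions (the input format and function name are mine - a real bot would fetch each page's interlanguage links via the wiki rather than take a dict):

```python
def missing_interwiki_links(langlinks):
    # langlinks maps each language edition of an article to the set of
    # language editions it currently links to,
    # e.g. {"en": {"zh"}, "zh": set()}.
    # Returns (page, missing_target) pairs needed to make the link graph
    # complete: every version should link to every other version.
    all_langs = set(langlinks)
    missing = []
    for lang, targets in langlinks.items():
        for other in sorted(all_langs - {lang} - set(targets)):
            missing.append((lang, other))
    return missing
```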
I am hoping to recruit a bot to help with the daily archiving at WP:AfC. The task used to be done by User:Uncle G's 'bot, and there were some plans for User:ShinmaBot take its place (along with some extra functions), but neither of their operators have been around recently. What's needed is three edits a day, shortly after 0000 UTC:
Can anyone help? × Meegs 18:13, 15 April 2006 (UTC)
User:Germen was using Image:Nl small.gif in his signature, which was deleted as a redundant image of Image:Flag of the Netherlands.svg. Could someone run a bot to replace all instances of the text [[File:Netherlands flag small.svg|25px]] (articles) with [[Image:Flag of the Netherlands.svg|25px]]? Thanks! ~ MDD 46 96 21:38, 20 April 2006 (UTC)
The catch-all regex you'd want for this might look something like this:
Personally I would recommend just removing it rather than replacing, but due to a history of problems with that user, I shall not be accepting this task. — Apr. 23, '06 [10:28] < freakofnurxture | talk>
I'm not sure if this is worth the time and effort, but what about a bot that would parse the Track Listing sections of pages at List of albums and create redirects to the album article? For example, the bot would create redirects like this one for the tracks at Operation: Mindcrime#Track listing. It would also create redirects for lowercase variations of song titles. Thanks, TheJabberwock 22:26, 20 April 2006 (UTC)
Note to whoever undertakes this task: it would be advisable to create disambiguation pages in many cases. Thus, if the bot detects that the page it is about to create already exists as an {{ R from song}}, it could determine the artist/band and the year associated with both songs and create a dab page, e.g.:
'''Foo at Tiffany's''' can refer to:
*[[Foo at Tiffany's (Steely Dan song)]], from the 1988 album ''[[Becker's Mom (album)|Becker's Mom]]''
*[[Foo at Tiffany's (Tupac song)]], from the 2007 album ''[[R U Still Buyin' Dis?]]''
Obviously the titles are fictitious, but the scenario is real. Each of these two links would then be created as a redirect to the article about the album on which the song was released. I'm thinking the above should only apply in the case of unrelated songs. Cover versions should, in my opinion, redirect to the original release, or be combined on the same line of the disambiguation page, if "Foo at Tiffany's" also refers to something else, such as a film. This would most likely require manual intervention.
In the event that the title refers only to a song, and equally notable versions of the same song have been released by more than one artist, it should probably be created as a distinct article explaining this. — Apr. 29, '06 [10:35] < freakofnurxture | talk>
Recently a template {{ daybar}} was created to allow easy navigation through articles for individual days given in the format below: June 10, 2004, June 11, 2004 etc. (see template page for specifics). On those pages it can be seen in use. This template could be used as the standard format for navigating through such articles, however it would be tedious to add them by hand. Hence I suggest a semi-automatic bot to add this template appropriately to the dates from January 1, 2003 till now. LukeSurl 16:29, 21 April 2006 (UTC)
{{daybar|{{subst:PAGENAME}}|xxxx}}. There are articles for most days, can't the ignore thing ignore nonexistent articles? -- Alfakim -- talk 13:42, 22 April 2006 (UTC)
I just wanna make my page or article "Don't Wanna Lose You" so that it opens when someone writes "DON'T WANNA LOSE YOU", "Don't Wanna Lose You" or "don't wanna lose you". I mean with different letters to the letters I've placed. —Preceding unsigned comment added by Charlie White ( talk • contribs)
I've moved the page to Don't wanna lose you which is the standard form for this. As far as I can tell, the system now works to direct any form of capitalisation to that page LukeSurl 16:07, 23 April 2006 (UTC)
Hello,
I don't know if this would be feasible, but I thought I would put it out there for discussion. It might be helpful (possible?) to create a bot that creates articles for all high schools. Just about every state has an article named "List of High Schools in XXXXX", XXXXX being the state, as well as a "Wikipedia Project Missing encyclopedic articles/High Schools/US/XXXXX", again XXXXX being the name of the state. The bot could (for instance) make an article with the name of the school and then the name of the city and state in parentheses (since a high school is often named the same as other high schools, ex: Southside High School (Gadsden, Alabama) and Southside High School (Fort Smith, Arkansas). The beginning article could be a stub that said only: "XXXXXXXXXX High School is a secondary education school located in CITY, STATE.", or something of that nature. It is my opinion that a lot of people that attend a high school, or alumni of a high school would be much more apt to edit an existing article on their high school than create a brand new article on their high school (much more so than other articles on wikipedia). ( Cardsplayer4life 20:12, 23 April 2006 (UTC))
It appears Wikipedia already has around 50,000 high school related pages [1], and a significant number of them are starved of content. I have a more useful suggestion, but it would require more effort to set up. A bot could scan the categories populated by the various {{*-school-stub}} templates, and for the articles we already have, e.g. Herbert Hoover High School (Des Moines), determine a [name, location] pair, form a search engine query, e.g. %2B%22Herbert+Hoover+High+School%22+%2B%22Des+Moines%22+-%22Wikipedia%22, produce a URL, e.g. [2], and post it to the article's talk page, to assist anybody who may have a keen interest in improving the article, but no idea where and how to find the info.
Further, with a little bit of fuzzy logic, it may be possible to ascertain some vital statistics (e.g. enrollment, year of establishment, mascot, name of current principal...) with a reasonable (but far from perfect) degree of accuracy, and enumerate them in the form of a bulleted list on the stub's talk page, including also a link to the page(s) from which the information was derived. A human could follow the links, verify the factoids, hopefully also locate additional information while visiting the various sites (some of them might be obscure news articles, for example), and then add the information to the article.
If somebody wanted to run a bot like that, and a critical mass of other people were willing to follow up on each result posting, this idea could be a successful operation, improve Wikipedia's coverage of non-notable schools, make the existing stubs worth keeping, and help reduce the community's animosity toward the WP:SCH project as a whole. If somebody wants to give this a serious attempt, I might be able to provide a bit of technical support. — Apr. 29, '06 [10:03] < freakofnurxture | talk>
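The query-building step above is mechanical enough to sketch; this reproduces the exact-phrase query shape shown in the Herbert Hoover example (the function name is mine, and the fuzzy-logic extraction of vital statistics is out of scope here):

```python
import urllib.parse

def school_search_query(name, location):
    # Build the +"name" +"location" -"Wikipedia" exact-phrase query,
    # percent-encoded for use in a search-engine URL.
    raw = '+"%s" +"%s" -"Wikipedia"' % (name, location)
    return urllib.parse.quote_plus(raw)
```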
The following appears to be a duplicate or very similar request to the one above. — Jun. 5, '06 [19:17] < freak| talk>
Is there a way of automatically placing the "featured article star" on the inbound "in other languages" links from other language wikipedias? Currently it is necessary to manually go through each linked language and add {{Link FA|en}} to the code, surely this could be done with a bot or something. Furthermore, the stars need to be removed when an article gets de-listed from FA status, this should also be done automatically. Is this even possible? Witty lama 04:18, 24 April 2006 (UTC)
.. would be a useful spelling fix. Colonies Chris 22:47, 24 April 2006 (UTC)
I think that when a page or more of text is deleted from a single article, a bot should undo the change. Since the standards of information that are put into the article are fairly high, anyone deleting a lot of this information is likely a vandal. For someone who is actually doing work, the bot should send them their alterations in its response (so if it's valid they don't have to retype it) and give a link to a human editor they can appeal to if their editing wasn't vandalism.
Hi
There seem to be about 145 articles (Google search terms USS + Splashed) on US Navy ships of the Second World War that use the terms "splash", "splashing" or "splashed" as euphemisms for the shooting down by US Forces of aircraft flown specifically by Japanese pilots, or crashes involving such aircraft. The wording appears to have been taken in verbatim from the Dictionary of American Naval Fighting Ships. It could be argued that this terminology makes light of the deaths of the young men involved. If this argument is accepted, then is this something a bot could be used to fix?
PS My comment implies no endorsement of the Japanese military during WW2. I had a grand uncle who served in the pacific. -- Sf 16:19, 28 April 2006 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | ← | Archive 4 | Archive 5 | Archive 6 | Archive 7 | Archive 8 | → | Archive 10 |
The majority of the US location articles that were added by Rambot are written in the present tense. "As of 2000, the population is... the average income is... the majority of families have..." etc. These should be changed to past tense.
It's pretty straightforward to do - within the demographics section, replace "is" with "was" (10 instances); replace "are" with "were" (12 instances); replace "have" with "had" (4 instances). I've been doing this manually when I come across them (see, for example, Jasper, New York), but that'll take a while across 30,000 articles and it seems to me like a task ideally suited to a bot. Any offers? -- OpenToppedBus - Talk to the driver 10:07, 3 February 2006 (UTC)
Was wondering if a bot could convert these star ratings (and their derivatives) in albums to plain text. Here is the discussion. Wikipedia_talk:WikiProject_Albums#Stars_to_text. Gflores Talk 06:14, 5 February 2006 (UTC)
I've been removing referral IDs from outgoing links I've been able to find every now and then. I frankly don't like the idea that someone could make money from Wikipedia by sneaking these links into places where they even could be considered legit.
What I'd like this bot to do is to find links that contain a referral ID, strip it off and post a normal one that works just as well.
example:
becomes:
http://www.amazon.com/gp/product/B000000W5L/sr=1-1/qid=1138522986/
O bli ( Talk) 23:11, 5 February 2006 (UTC)
Since fy: has been getting a higher load of real vandalism lately (as opposed to the occasional graffiti-editor), I'm looking for a way to revert the contributions of a specified anonymous user in a somewhat automated way. As the bot bit does have a function in this process, as done by administrators, I expected to find an actual bot to help them out, but I can't find it. If I'm blind, could someone point me in the right direction. If not, would someone be willing and able to write such a bot? The task seems pretty straight forward; it's just an awful lot of clicking when done by hand. 217.123.4.108 21:03, 7 February 2006 (UTC)
The backlog on Category:Cleanup by month is getting out of control, with 1.3% of Wikipedia currently tagged for cleanup. In order to speed the cleanup process, I propose a janitor bot to move "cleanup" pages that belong in other maintenance departments elsewhere.
The bot would have the following proposed behaviors:
The first two tasks appear to be within the capabilities of current bots, and should be easy to accomplish. Going through the capabilities of current Wikipedia:Bots does not show the capability to determine percentage of wiki links, but this is probably not a difficult task, as word counting and a repeated regexp search for /[[*]]/ should be all that's required.
These are the tasks that seem obviously automatable. Much of WP:CU requires human interaction; but at least we can figure out some of the human interaction that's necessary. Alba 00:39, 8 February 2006 (UTC)
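The percentage-of-wikilinks check mentioned above might look something like this (a sketch; the threshold that counts as "mostly links" and the exact word-splitting rules are left to whoever runs the bot):

```python
import re

# Count [[...]] wikilinks against total whitespace-separated words.
WIKILINK = re.compile(r'\[\[[^\]]+\]\]')

def link_density(wikitext):
    words = len(wikitext.split())
    links = len(WIKILINK.findall(wikitext))
    return links / words if words else 0.0
```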
I created {{ softredirect}} some time ago, and it seems to have been well received. It is supposed to be used instead of interwiki redirects, which do not work. However, it's hard to find the interwiki redirects without a bot. I'd like to ask for a bot to convert all interwiki redirects into uses of the {{ softredirect}} template. -- cesarb 01:25, 11 February 2006 (UTC)
Category:Stock exchanges appears to be populated by all the stock exchanges, most of which fall into the geographical categories of ...in Europe, ...in North America, ...in Asia, etc. According to WP:CAT, articles shouldn't be in a broad category that is covered by a sharper category. What bots, if any, do mass recategorization like this? -- Christopherlin 18:03, 11 February 2006 (UTC)
Here's an easy one: phd, Phd, PhD and Ph.D should be changed into Ph.D. Kjaergaard 05:36, 17 February 2006 (UTC)
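For what it's worth, a naive sketch of that substitution. A plain replacement is risky - "phd" can occur inside URLs, template parameters, or proper names - so any real run would need word-boundary checks and human review:

```python
import re

# Normalise phd / Phd / PhD / Ph.D to "Ph.D."; the negative lookahead
# avoids double-dotting text that is already written "Ph.D."
PHD = re.compile(r'\b(?:Ph\.D|PhD|Phd|phd)\b(?!\.)')

def normalise(text):
    return PHD.sub('Ph.D.', text)
```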
This was originally posted on WP:VPA, but User:Angela referred me here.
Does anyone have a robot that they could run which could change all occurences of "id=toc" into "class=toccolours"? They both look the same to most folk, but id=toc hides the division from folk who have preferences for "contents turned off". And 99% of these are not tables of contents but are related items link boxes. Editing the 1% by hand would be easier than the 99%.
For an example, see Elisa Oyj and Template:Finnishmobileoperators which has had this change done. -- SGBailey 21:17, 18 February 2006 (UTC)
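The substitution itself is trivial - the hard part is hand-checking the 1% that really are tables of contents. A sketch of the literal replacement (covering both quoted and unquoted attribute forms, which is an assumption about how the attribute appears in practice):

```python
def fix_toc(wikitext):
    # Replace both quoting styles of the id with the toccolours class.
    return (wikitext
            .replace('id="toc"', 'class="toccolours"')
            .replace('id=toc', 'class=toccolours'))
```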
It might be nice if American cities/towns were displayed as nicely as British towns are. E.g. on the right side are all the vital/geo stats, and the article explains the town in question. American towns currently get a red dot on a map of the state they are in, but non-American readers probably have no idea where the dot is in relation to the USA as a whole. In general, the system for American towns/cities/etc. seems a bit American-centric. Maybe I'm off base. I wouldn't mind a discussion, as I feel I can learn quite a bit more about British towns in general than their American counterparts from the general info displayed.
The category Category:Eurovision Song Contest needs categorising better, but it's kind of a bit hard to do alone manually. I'm wondering if a bot could do it better. Here's what essentially needs to be done:
Thanks if anyone can help. Esteffect 00:18, 23 February 2006 (UTC)
I'm wondering if I can find any of these bots somewhere. Right now I'm using pyWikipedia with XML Dump file.
-- Manop - TH 20:56, 24 February 2006 (UTC)
There should be a bot for automating the processes of WP:COTW, WP:AID, and other collaborations. It should count the number of votes, remove failed nominations, and do everything listed under Wikipedia:Collaboration of the week/Maintenance if the clock reaches 18:00 Sunday, so people can have more time to work on expanding the collaborations. -- King of Hearts | (talk) 00:05, 25 February 2006 (UTC)
In discussion at Template talk:Infobox Company there is an emerging consensus that the slogan field should be removed from the infobox, but we want to hang on to the data in the field. There are two things which you might be able to help us with: firstly, could a bot create a list of the pages which contain infoboxes with slogans, and secondly, would it be possible for a bot to remove the data and insert it perhaps as a section just before 'see also' with a heading such as 'corporate branding'? (The first would be very useful, the second is still subject to the discussion outcome - just trying to get a feel for what can be done.) Many thanks Ian3055 22:09, 27 February 2006 (UTC)
We need a bot to replace Userbox and Userboxes with Wikipedia:Userboxes. Thank you. -- F a ng Aili 22:06, 3 March 2006 (UTC)
Is there a bot for adding an info box to all articles in a category? I checked the Wikipedia:bots page, but I didn't see a bot that would do what I'm thinking.
Currently the dinosaur pages on WP are in bad shape. There are several hundred categorized dinosaur stubs that could use an infobox, but manually adding them might take some time. Can't a bot do all that work instead?-- Firsfron 02:47, 6 March 2006 (UTC)
This suggestion is about a tool and not a bot, but I didn't know where else to put it. I'm suggesting a tool that searches through a page's history and lists all the images that have been used, even if they were removed. This way, we could retrieve images that were replaced by better ones in the article but are still good for usage. The only problem is that Wikipedia doesn't categorise images, which is why I think this tool would be useful. CG 21:22, 10 March 2006 (UTC)
Formerly the names "Crimea" and "Ukraine" were used with the definite article. Today only Crimea and Ukraine (without the article) are considered to be the correct forms. But the Crimea and the Ukraine can still be found in many wiki articles. IMHO it would be a good task for a bot - to remove "the". Don Alessandro 09:07, 12 March 2006 (UTC)
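A deliberately conservative sketch of that removal. "The Ukraine" may still be correct inside quotations or historical titles, so a human should review each change; matching lowercase "the" only also sidesteps sentence-initial "The Ukraine ...", which would need recapitalisation and thus human attention anyway:

```python
import re

# Drop the lowercase definite article before "Ukraine" and "Crimea".
THE_NAME = re.compile(r'\bthe (Ukraine|Crimea)\b')

def drop_article(text):
    return THE_NAME.sub(r'\1', text)
```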
is there a bot that can perform search functions to link Wikipedia articles that are relevant to a Wikibook? Also, more generally, Wikicities, other areas of Wikibooks, Wiktionary (and, I know, it's less likely the heuristic would be able to discern what was relevant, but I'd rather delete bad links than put in all of my own), Google? Yahoo? Prometheuspan 18:18, 16 March 2006 (UTC)
I will look that up. As a side note, I had envisioned linking to as many other wikibooks as were relevant, and to as many wikipedia articles as possible. Prometheuspan 22:26, 28 February 2006 (UTC)
Unfortunately, I haven't heard of anything like this. --Derbeth talk 23:40, 16 March 2006 (UTC)
What about a bot that links references to religious texts to the appropriate section of that wikibook? This may need to be written for each book separately, but of particular interest to me would be the Jewish/Christian Bible and the Islamic Qur'an. If this would interest anyone, please contact me on my talk page!
Andrewjuren 20:50, 24 March 2006 (UTC)
huh. You'd think they would have like an RSF or some such thing set up to link a new wiki to its parent networks like that. I have asked at the bot request wikipedia zone. Is there somebody else or someplace else to go look? Prometheuspan 00:33, 17 March 2006 (UTC)
There has been discussion at Wikipedia talk:Categorization about repopulating some categories that had previously been depopulated after being divided into subcategories. One example is Category:American actors. There is a good deal of support for doing this. Before there was {{ CategoryTOC}} it was necessary to break large categories into smaller subcategories, and there is a value in having these smaller categories. However, categories also serve as the master index of subjects and it is often frustrating to have to look in several subcategories to browse through the articles in a subject. A good example of this is Category:Film directors. The proposal is to keep the subcategories, but also have articles duplicated in parent (or grand-parent) categories up to the level of topic articles.
I am wondering if a bot could be created to run frequently (once a day?) which would go through a list of categories that should be duplicated in other categories and check to see if the duplications exist. If they do not, they would be added. I suspect that there will need to be a page created to discuss this duplication process ( Wikipedia:Duplicated Categories?) and editing the list of duplications would probably have to be limited to admins. By having this bot, a person could add the lowest level category that applies and the category would also end up in the higher level categories. The bot would have to look at each article in the category and see if the higher level categorization exists, if it does not, the categorization would be added. For categories of people, the piping should be copied so that the article is alphabetized correctly.
Another bot might scan through the higher level lists and collate a list of articles that have not been put in any of the lower level subcategories.
I am just wondering if this is possible. There would have to be quite a bit of discussion about whether this should happen and how it will happen. First I want to know what is possible. Thanks. -- Samuel Wantman 10:21, 19 March 2006 (UTC)
It seems to me that a better solution would be to get the MediaWiki software to display all subcat articles of a particular category. If this feature is introduced in the future, carrying out the category population with a bot would have been a waste.-- Commander Keane 12:26, 20 March 2006 (UTC)
Hi there! Those on the Spoken Wikipedia Project would like to explore using a bot to help with our work. Here are a couple of things that have come up in discussions with other project members:
Right now, we have a manually-updated RSS feed that lists new articles that have been recorded. That way, project members and casual listeners can find our new content easily. It would be great if we had a way to automate this, to save SCEhardt the work. Let me know if you're interested in that project.
Currently, we use several tags for our project:
1. We have a tag that people use when they request an article to be read aloud and recorded.
2. We are discussing two tags that project members can add to the article's talk page:
Once an article is recorded and uploaded:
3. We have a couple of tags that go on the article's page itself:
4. In addition to that, we add a tag to the article's talk page
5. Additionally, we have a tag that goes on the Wikipedia:Featured articles page that alerts people that the article is available in an audio format, too.
6. Finally,
So as you can see, we use between 4 and 6 tags for each recording. They all serve a good purpose: they promote the project, help organize our work, and make sure that people can find our recordings.
However, it's a lot of work to do this. Not all of these tags can be automated, but it seems to me that at least a couple could be. For instance, #4 might be. And it would be very useful if we could automate #6.
Again, if this idea is something you'd like to pursue, I can re-explain all of this and/or provide more details. Ckamaeleon ((T)) 02:41, 20 March 2006 (UTC)
Nothing? D'oh! Ckamaeleon ((T)) 21:35, 21 May 2006 (UTC)
Several of AllyUnion's bots appear to have gone offline several days ago. It's only when the automated tasks that you are used to being done don't get done that you realize how much you depend on a bot. And this is currently the case. From AllyUnion's user page, it appears that he is mostly on wikibreak. I tried emailing him, but his email does not work. So I've left a message on his talk page. But if he's on break, who knows when he will see it.
So the next question becomes, how long do we wait until the bots are declared out of service, and how then can we get some other bots to pick up the duties. Specific bots that appear to be down include:
NekoDaemon being out of service is what brought this all to my attention, as CFD is one of my normal home playgrounds. But AFD bot appears to have an even more critical role. - TexasAndroid 15:00, 21 March 2006 (UTC)
I have been trying to clean up the cocktails articles, I have tagged about 90 articles for "move to wikibooks". Anyone have a bot that could transwiki them? They all have cocktail recipes in them, the majority are nothing but recipe. They'd need to end up in the wikibook Bartending. http://en.wikibooks.org/wiki/Category:Bartending_pages_needing_work Once transwikied, I could then clean up the wikipedia articles. -- Xyzzyplugh 09:36, 26 March 2006 (UTC)
a bot to make the following fixes to templates could do much, much good to wikipedia (it'd avoid manual fixing of these things, at the very least lol):
* replace id="toc" with class="toccolours"
* remove |-|}, which makes no sense, or |-|-, which is equally nonsensical
* the margin:0 auto; style declaration
* the clear:both; style declaration
Circeus 20:07, 31 March 2006 (UTC)
See Talk:Voivodes of the Polish-Lithuanian Commonwealth#Bot help for details. Thanks!-- Piotr Konieczny aka Prokonsul Piotrus Talk 03:27, 1 April 2006 (UTC)
We have lots of wars and we name them XX Years' War... which is close to XX Years War. I think the apostrophe is the more common way to do it... and the proper one... but both are used in some settings. Should this be bot-ted?
gren グレン 02:35, 2 April 2006 (UTC)
I can envision the development of a bot that searches for phrases such as "is a great", "is a fantastic", "is a terrific", "is an awful", etc., that could indicate strong POV within the article text. If the phrase appears within quoted text, i.e. as dialog, then it would be excluded.-- Hooperbloob 21:04, 2 April 2006 (UTC)
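A sketch of that scan, using only the example phrases given above. Quoted dialog is skipped crudely by deleting "..." spans before searching:

```python
import re

# Flag peacock phrasing like "is a great" / "is an awful" outside quotes.
POV = re.compile(r'\bis an? (?:great|fantastic|terrific|awful)\b')

def pov_hits(text):
    unquoted = re.sub(r'"[^"]*"', '', text)  # crude removal of quoted text
    return POV.findall(unquoted)
```

A real phrase list would of course be much longer, and would need tuning against false positives.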
Someone has put the following paragraph in a bunch of articles on Ohio townships:
This is inaccurate. Some municipalities in Western Reserve townships, such as Cortland, Ohio, are independent from surrounding townships. Others, such as Newton Falls, Ohio, are part of the surrounding townships. This is no different than anywhere else in Ohio.
I'd like to see a bot that would delete the paragraph from all pages on which it is found. -- Mwalcoff 02:24, 4 April 2006 (UTC)
(There was User:Cobo of course, but that doesn't seem to have ever taken off.)
I find a lot of copyvios that have been dormant for months... and it seems like there are probably tons out there, if I can just check a few random articles and find one pretty fast. It seems like a bot with an organized approach would uncover thousands, and with an easy methodology... just select a few random 5-10 word phrases from the article, no punctuation, and search on Google, Altavista, etc for the exact phrase. The bot would make a list of any positive results. Of course it would have to ignore wikipedia mirrors. The odds of it listing a copyvio of something that is actually PD/GPL are low, in my experience, people are more interested in copying and pasting press releases, corporate bios, etc. than Project Gutenberg kind of stuff. But even still... that's where the human factor comes in.
The bot would just create a simple list of possible copyvios (with URLs), so it would be 100% non-invasive... humans (me, for example) would go through the list and handle as appropriate. The list could be stored in the bots userspace or wherever... I imagine it wouldn't be hard to drum up some people to go through it.
There are over a million articles now, so it would be a lot of work and time... but afterwards it could perhaps monitor new articles (though that might be more difficult to implement). Also, since it's not live, I'm not even sure it would need to be flagged as a bot... all it would do is upload a list eventually, or in installments perhaps.
Anyway, I'm not a programmer... so I have no idea how hard this would be to implement. But given that it's not live, it could presumably be written in any language, up to Visual Basic. I've been thinking about this for a while though, and I think it would make a very positive impact on Wikipedia, and our goal of creating a truly free encyclopedia. Thoughts? -- W.marsh 22:37, 5 April 2006 (UTC)
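The phrase-sampling step described above is easy to sketch. The search-engine query itself is omitted here, since it depends on whichever engine and API the bot operator picks; the default phrase count and length are arbitrary choices within the "few random 5-10 word phrases" suggestion:

```python
import random
import re

def sample_phrases(text, n_phrases=3, phrase_len=8):
    # Pull a few random runs of consecutive words, stripped of punctuation,
    # suitable for exact-phrase search queries.
    words = re.findall(r'[A-Za-z]+', text)
    phrases = []
    for _ in range(n_phrases):
        if len(words) <= phrase_len:
            break
        start = random.randrange(len(words) - phrase_len)
        phrases.append(' '.join(words[start:start + phrase_len]))
    return phrases
```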
Hello,
I am trying to find out if there is a bot created already that establishes a dictionary of terms found in a wiki site. Currently, there is no listing of definitions, and the terms end up being fairly convoluted at times.
I am able to generate a list of all the terms, and also to create definitions (manually), but I need to go back through the wiki site and create links from those terms to the dictionary. Further, some of the terms are too common to automatically replace using a bot, so they need to be removed from the "link creation" process, yet still remain in the dictionary. Is there something like this I can use as a bot base? Or is it simple to write?
Please note that this is for a mediawiki site. Is there a way that mediawiki can be set up to do this? (I have only glanced at the software docs, as I am not the admin).
Admins, you may email me with any responses. Thanks in advance!
Delfeld 04:37, 6 April 2006 (UTC)
I've noticed while browsing some of the Chinese Wikipedia articles that they will often link to an English page which has no corresponding link back to the Chinese version, and even one or two Chinese pages with no link to the relevant English article. I actually came across this several times in a short period of time, and would guess it to be not all that uncommon.
It would be helpful if someone could create a bot to scan pages and follow the links to different language versions, and make sure that all of the different translations are linked up. (i.e. that if a page exists in 15 different languages on any given topic, that each of those 15 versions has links to all 14 others).
Aside from just making it easier to find content in multiple languages, this may also encourage users to contribute in more than one language if they know the article exists in a second language they are familiar with.
Any thoughts?
-- Hughitt1 19:35, 6 April 2006 (UTC)
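Once each page's language links are known, the symmetry check above reduces to a small graph computation; a toy sketch (the pywikipedia interwiki script does something along these lines against the live wikis, which is the hard part this sketch ignores):

```python
# links maps each language edition to the set of editions it links to.
def missing_links(links):
    # Return (page, lang) pairs where "page" lacks a link back to "lang".
    missing = []
    for lang, targets in links.items():
        for target in targets:
            if lang not in links.get(target, set()):
                missing.append((target, lang))
    return missing
```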
I am hoping to recruit a bot to help with the daily archiving at WP:AfC. The task used to be done by User:Uncle G's 'bot, and there were some plans for User:ShinmaBot take its place (along with some extra functions), but neither of their operators have been around recently. What's needed is three edits a day, shortly after 0000 UTC:
Can anyone help? × Meegs 18:13, 15 April 2006 (UTC)
User:Germen was using Image:Nl small.gif in his signature, which was deleted as a redundant image of Image:Flag of the Netherlands.svg. Could someone run a bot to replace all instances of the text [[File:Netherlands flag small.svg|25px]] (articles) with [[Image:Flag of the Netherlands.svg|25px]]? Thanks! ~ MDD 46 96 21:38, 20 April 2006 (UTC)
The catch-all regex you'd want for this might look something like this:
Personally I would recommend just removing it rather than replacing, but due to a history of problems with that user, I shall not be accepting this task. — Apr. 23, '06 [10:28] < freakofnurxture | talk>
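One plausible shape for such a catch-all pattern - the exact filename variants covered are assumptions, so the list would need checking against what actually appears in signatures:

```python
import re

# Match both the deleted gif and the svg variant named above, with or
# without a size parameter, under either the Image: or File: prefix.
FLAG = re.compile(
    r'\[\[(?:Image|File):(?:Nl[ _]small\.gif|Netherlands[ _]flag[ _]small\.svg)'
    r'((?:\|[^\]]*)?)\]\]')

def replace_flag(wikitext):
    return FLAG.sub(r'[[Image:Flag of the Netherlands.svg\1]]', wikitext)
```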
I'm not sure if this is worth the time and effort, but what about a bot that would parse the Track Listing sections of pages at List of albums and create redirects to the album article? For example, the bot would create redirects like this one for the tracks at Operation: Mindcrime#Track listing. It would also create redirects for lowercase variations of song titles. Thanks, TheJabberwock 22:26, 20 April 2006 (UTC)
Note to whoever undertakes this task: it would be advisable to create disambiguation pages in many cases. Thus, if the bot detects that the page it is about to create already exists as an {{ R from song}}, it could determine the artist/band and the year associated with both songs and create a dab page, e.g.:
'''Foo at Tiffany's''' can refer to:
*[[Foo at Tiffany's (Steely Dan song)]], from the 1988 album ''[[Becker's Mom (album)|Becker's Mom]]''
*[[Foo at Tiffany's (Tupac song)]], from the 2007 album ''[[R U Still Buyin' Dis?]]''
Obviously the titles are fictitious, but the scenario is real. Each of these two links would then be created as a redirect to the article about the album on which the song was released. I'm thinking the above should only apply in the case of unrelated songs. Cover versions should, in my opinion, redirect to the original release, or be combined on the same line of the disambiguation page, if "Foo at Tiffany's" also refers to something else, such as a film. This would most likely require manual intervention.
In the event that the title does refer only to a song, and equally notable versions of the same song have been released by more than one artist, it should probably be created as a distinct article explaining this. — Apr. 29, '06 [10:35] < freakofnurxture | talk>
Recently a template {{ daybar}} was created to allow easy navigation through articles for individual days given in the format below: June 10, 2004, June 11, 2004, etc. (see template page for specifics). On those pages it can be seen in use. This template could be used as the standard format for navigating through such articles; however, it would be tedious to add them by hand. Hence I suggest a semi-automatic bot to add this template appropriately to the dates from January 1, 2003 till now. LukeSurl 16:29, 21 April 2006 (UTC)
{{daybar|{{subst:PAGENAME}}|xxxx}}. There are articles for most days; can't the ignore thing ignore nonexistent articles? -- Alfakim -- talk 13:42, 22 April 2006 (UTC)
I just wanna make my page or article "Don't Wanna Lose You" so that it gets opened when someone writes "DON'T WANNA LOSE YOU", or "Don't Wanna Lose You", or "don't wanna lose you". I mean with different letters to the letters I've placed. —Preceding unsigned comment added by Charlie White ( talk • contribs)
I've moved the page to Don't wanna lose you, which is the standard form for this. As far as I can tell, the system now directs any form of capitalisation to that page. LukeSurl 16:07, 23 April 2006 (UTC)
Hello,
I don't know if this would be feasible, but I thought I would put it out there for discussion. It might be helpful (possible?) to create a bot that creates articles for all high schools. Just about every state has an article named "List of High Schools in XXXXX", XXXXX being the state, as well as a "Wikipedia Project Missing encyclopedic articles/High Schools/US/XXXXX", again XXXXX being the name of the state. The bot could (for instance) make an article with the name of the school and then the name of the city and state in parentheses (since a high school is often named the same as other high schools, ex: Southside High School (Gadsden, Alabama) and Southside High School (Fort Smith, Arkansas). The beginning article could be a stub that said only: "XXXXXXXXXX High School is a secondary education school located in CITY, STATE.", or something of that nature. It is my opinion that a lot of people that attend a high school, or alumni of a high school would be much more apt to edit an existing article on their high school than create a brand new article on their high school (much more so than other articles on wikipedia). ( Cardsplayer4life 20:12, 23 April 2006 (UTC))
It appears Wikipedia already has around 50,000 high school related pages [1], and a significant number of them are starved of content. I have a more useful suggestion, but it would require more effort to set up. A bot could scan the categories populated by the various {{*-school-stub}} templates, and for the articles we already have, e.g. Herbert Hoover High School (Des Moines), determine a [name, location] pair, form a search engine query, e.g. %2B%22Herbert+Hoover+High+School%22+%2B%22Des+Moines%22+-%22Wikipedia%22, produce a URL, e.g. [2], and post it to the article's talk page, to assist anybody who may have a keen interest in improving the article, but no idea where and how to find the info.
Further, with a little bit of fuzzy logic, it may be possible to ascertain some vital statistics (e.g. enrollment, year of establishment, mascot, name of current principal...) with a reasonable (but far from perfect) degree of accuracy, and enumerate them in the form of a bulleted list on the stub's talk page, including also a link to the page(s) from which the information was derived. A human could follow the links, verify the factoids, hopefully also locate additional information while visiting the various sites (some of them might be obscure news articles, for example), and then add the information to the article.
If somebody wanted to run a bot like that, and a critical mass of other people were willing to follow up on each result posting, this idea could be a successful operation, improve Wikipedia's coverage of non-notable schools, make the existing stubs worth keeping, and help reduce the community's animosity toward the WP:SCH project as a whole. If somebody wants to give this a serious attempt, I might be able to provide a bit of technical support. — Apr. 29, '06 [10:03] < freakofnurxture | talk>
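Building the search-engine query from a [name, location] pair, as described above, is straightforward; a sketch (the base search URL is a placeholder, not a real endpoint):

```python
import urllib.parse

def school_query_url(name, location):
    # Exact-phrase query of the form +"Name" +"Location" -"Wikipedia",
    # percent-encoded for use in a search-engine URL.
    query = '+"%s" +"%s" -"Wikipedia"' % (name, location)
    return 'http://search.example/?q=' + urllib.parse.quote_plus(query)
```

For the example above this yields exactly the encoded query string quoted in the request (%2B%22Herbert+Hoover+High+School%22+...).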
The following appears to be a duplicate or very similar request to the one above. — Jun. 5, '06 [19:17] < freak| talk>
Is there a way of automatically placing the "featured article star" on the inbound "in other languages" links from other language wikipedias? Currently it is necessary to manually go through each linked language and add {{Link FA|en}} to the code, surely this could be done with a bot or something. Furthermore, the stars need to be removed when an article gets de-listed from FA status, this should also be done automatically. Is this even possible? Witty lama 04:18, 24 April 2006 (UTC)
.. would be a useful spelling fix. Colonies Chris 22:47, 24 April 2006 (UTC)
I think that when a page or more of text is deleted from a single article, a bot should undo the change. Since the standards for information put into an article are fairly high, anyone deleting a lot of this information is likely a vandal. For someone who is actually doing work, the bot should send them their alterations in its response (so if it's valid they don't have to retype it) and give a link to a human editor they can appeal to if their editing wasn't vandalism.
Hi
There seem to be about 145 articles (Google search terms USS + Splashed) on US Navy ships of the Second World War that use the terms "splash", "splashing" or "splashed" as euphemisms for the shooting down by US forces of aircraft flown specifically by Japanese pilots, or crashes involving such aircraft. The wording appears to have been taken verbatim from the Dictionary of American Naval Fighting Ships. It could be argued that this terminology makes light of the deaths of the young men involved. If this argument is accepted, then is this something a bot could be used to fix?
PS My comment implies no endorsement of the Japanese military during WW2. I had a grand uncle who served in the pacific. -- Sf 16:19, 28 April 2006 (UTC)