From Wikipedia, the free encyclopedia

Note: Unless stated otherwise or directly sourced, the statistics used in this essay were collected in February 2007 and may now be out-of-date.

Contrast with essay: Wikipedia:Wikipedia is succeeding. See also Wikipedia:Failure, on the virtues of failure.

Is Wikipedia failing in its aim of becoming a reputable, reliable encyclopedia? Here are some illustrations of ways in which it is not fulfilling that aim.

Assumptions

To assess the quality of Wikipedia's articles, some assumptions are necessary. Here it is assumed that:

  • The criteria defined by the Wikipedia 1.0 editorial team at {{Grading scheme}} accurately reflect the quality of the articles to which these ratings have been applied.
  • Articles that are neither FA nor A-class fall below the standards that an encyclopedia should demand of its content (possibly with the exception of some WikiProjects that have chosen to place GA above A-class instead of the other way round).
  • The sample of 300,000 articles assessed, with results listed at WP:1.0/I, is representative of the whole encyclopedia.
  • The definition of an encyclopedia in Wikipedia's article encyclopedia as a "compendium of human knowledge" is correct.

Criteria which indicate substantial failings

Performance on core topics

Success is not guaranteed.

Vital articles lists 988 articles on topics that can be considered essential. These topics should have articles of the very highest quality – ideally a featured article. So do they? In fact, of those 988, only 81 are featured articles and 9 are A-class. This means that about 91% of the essential topics that should have excellent articles fall short of the standard, assuming that all vital articles that meet the FA criteria have been nominated for FA status.

Do they fall short by a long way? 65 are listed as good articles, which, according to Template:Grading scheme, means that 'other encyclopedias could do a better job'. Some editors have criticised the GA process as inconsistent and arbitrary, so the quality of those articles is further in doubt. The remaining 833 are B-class, C-class, stub-class or start-class on the assessment scale; this indicates that many articles require substantial work before they will match or exceed the standards found in other encyclopedias.

On current trends, how long will it take before all the Vital Articles are featured or A-class articles? On 1 January 2006, 41 of them were featured; by 1 January 2007, this had risen to 71. By 1 July 2011 (four and a half years later) the number was 90, indicating a sharp decrease in the promotion rate. Even assuming that the current rate (19 articles in 54 months, roughly four a year) declines no further, it will take around 225 years for all of the vital articles to reach the standards expected of them.
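
A back-of-the-envelope calculation makes this extrapolation explicit. The sketch below uses only the figures quoted above and assumes the promotion rate stays constant, which is itself the optimistic case:

    # Extrapolation from the figures quoted above; the constant-rate assumption
    # is the optimistic one discussed in the text.
    vital_total = 988
    already_fa_or_a = 81 + 9                    # featured + A-class vital articles
    promotions = 90 - 71                        # FA promotions, Jan 2007 to Jul 2011
    months = 54
    rate_per_year = promotions * 12 / months    # roughly 4 promotions per year

    remaining = vital_total - already_fa_or_a   # 898 articles still short of the mark
    print(f"{rate_per_year:.1f} promotions per year "
          f"-> about {remaining / rate_per_year:.0f} years to clear the list")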

Performance on broader topics

There are 6,464 featured articles now. There are also 39,389 good articles. However, there are currently 6,803,974 articles on Wikipedia. This means that roughly 99.3% of all the articles on Wikipedia have not yet been assessed as featured or good articles. In many cases this is because they are not considered well-written, verifiable, broad, or comprehensive in their coverage. The results of the largest-scale assessment of Wikipedia content, covering 18% of the total number of articles, can be found at WP:1.0/I. These results show that 0.7% of assessed articles are either FAs or A-class articles.

One useful, informal exercise for a reader is to critically read ten random articles and judge their quality. The numbers above suggest that, on average, you would expect to find one FA or A-class article in every 143 articles you looked at (based on WP:1.0/I), or one in every 762 (based on the total numbers of FAs and A-class articles).
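
The arithmetic behind these figures is straightforward; a minimal sketch using only the counts quoted above:

    # Arithmetic behind the figures quoted above.
    featured, good, total = 6_464, 39_389, 6_803_974

    fa_or_ga_share = (featured + good) / total      # about 0.67%
    print(f"{1 - fa_or_ga_share:.2%} of articles are neither FA nor GA")

    # Expected number of FA or A-class articles among ten random articles,
    # under the two rates quoted above (1 in 143 and 1 in 762).
    for odds in (143, 762):
        print(f"1 in {odds}: expect {10 / odds:.3f} per ten random articles")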

Maintenance of standards

Wikipedia Celebrates 750 Years Of American Independence
"…While Wikipedia's "American Independence" page remains available to all site visitors, administrators have suspended additions and further edits to its content due to vandalism." The Onion July 26, 2006 [1]

Do articles which are judged to have reached the highest standards remain excellent for a long time, or does their quality decline as well-meant but poor edits accumulate? There are currently 340 former featured articles, meaning that more than 20% of all articles that have ever been featured are no longer featured.

Many editors observe that an FA that is not actively maintained inevitably declines; for an example see Ryanair, which attracts large numbers of highly biased edits which have wrecked a formerly excellent article. Sun's lead section was reduced to a few short sentences by an editor who either hadn't read or didn't understand the guidelines on what a lead section is supposed to be, and no-one has restored the previously existing summary. A whole section of Mauna Loa was removed by a vandal in November, and was not restored for a month. Generally, if the primary author of an FA does not take care of it, checking changes up to several times a day, it is likely to have its quality compromised by unnoticed vandalism or, far more damaging in the long term, well-intentioned but poor quality edits.

Some or many articles may lose featured article status because they do not meet current standards, rather than because they have declined in quality. Without case-by-case analysis it is impossible to say for what proportion of articles this is the case. However, we can note that the featured article review process has not been as successful as would be ideal at encouraging featured articles to improve in line with rising standards.

On the other hand, certain hoax articles containing blatantly incorrect information can stay up for years. [2] In January 2008, the longest-running hoax yet found was discovered: the article on Brahmanical See, which claimed that Hinduism has a Pope. It had existed for roughly 3½ years.

Rate of quality article production

Many argue that Wikipedia is a work in progress and that, given time, all articles will reach very high standards. Unfortunately, this is not borne out by the rate at which articles are currently being judged to meet featured article criteria. About one article a day on average becomes featured; at this rate, it will take 4,380 years for all the currently existing articles to meet FA criteria. If the current approximately exponential growth rate of Wikipedia (which will see it double in size in about the next 500 days) continues, then on current trends there will never be a time when all articles have been promoted to featured article status.
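
The "never" claim follows from comparing a constant promotion rate with exponential article growth. The simulation below is a minimal sketch under the assumptions stated above (about one promotion per day, the article count doubling every 500 days, and a starting count of roughly 1.6 million articles, which is what the 4,380-year figure implies):

    # Constant FA promotion vs. exponential article creation, under the
    # assumptions stated above. The 1.6 million starting count is implied
    # by the 4,380-year figure (4,380 x 365 is about 1.6 million).
    articles = 1_600_000
    featured = 0
    daily_growth = 2 ** (1 / 500)        # factor that doubles the count in 500 days

    for year in range(1, 11):
        for _ in range(365):
            articles *= daily_growth
            featured += 1                # about one promotion per day
        print(f"year {year:2d}: {featured:>6,} featured of {articles:>13,.0f} articles")
    # The unfeatured backlog grows without bound: exponential creation
    # outpaces any constant promotion rate.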

Should we even expect all articles to meet the featured article criteria? A majority of people who commented on one earlier discussion felt that the featured article criteria do indeed define the standards that all articles need to meet.

Is WP:FA a bottleneck? The rate at which articles have been promoted has remained more or less constant for well over a year (see WP:GAS), while article creation rates have increased exponentially throughout that time. If the system prevents large numbers of quality articles from being recognised as such, then that indicates that some kind of reform of the system is necessary.

Special:Recentchanges provides evidence that the rate of addition of substantial encyclopaedic content is low. You may find it informative to look at the last 200 recent changes and count how many of them are directly building the encyclopedia. That means observing reasonably sound content being added (and not under a 'trivia' header) to an article that is not a borderline AFD candidate. (The info on bytes added/removed narrows the search quite quickly.) Typically this reveals fewer than ten substantive article-space changes in 200. One such analysis can be found at User:Opabinia regalis/Article statistics.
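
This manual exercise can be partly automated. The sketch below assumes the standard MediaWiki API at en.wikipedia.org and uses an arbitrary 500-byte threshold as a crude stand-in for "substantial"; it only pre-filters the last 200 article-space changes by size, and a person still has to read the surviving edits to judge whether they genuinely build the encyclopedia:

    # Pre-filter the last 200 article-space changes by size, mirroring the
    # manual exercise above. The 500-byte threshold is an arbitrary assumption;
    # the surviving edits still need to be read and judged by a person.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcnamespace": 0,               # article space only
        "rctype": "edit|new",
        "rcprop": "title|sizes",
        "rclimit": 200,
        "format": "json",
    }
    changes = requests.get(API, params=params, timeout=30).json()["query"]["recentchanges"]

    substantial = [c for c in changes
                   if c.get("newlen", 0) - c.get("oldlen", 0) >= 500]
    print(f"{len(substantial)} of {len(changes)} recent edits added 500+ bytes")
    for c in substantial:
        print(f"  {c['title']}: +{c.get('newlen', 0) - c.get('oldlen', 0)} bytes")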

The strength and size of the core community

From 2007 to 2012, the total number of active Wikipedia editors gradually declined. [3] [4]

One of the most important aspects of Wikipedia is what could be called its "core community," as distinguished from the "community-at-large." The community-at-large is composed of everyone who edits Wikipedia, while the core community is the small group of veteran editors who regularly watch policy pages and facilitate the administration of various aspects of the site (such as mediation, the help desk, the reference desk, and the various noticeboards). Earlier in Wikipedia's history, there was a project called Esperanza designed to strengthen Wikipedia's core community. In mid-2007, Esperanza was disbanded after facing criticism. At first, other projects were started to fulfill the same role that Esperanza did. Several of these projects have since been abandoned.

Then there are also Expert rebellion and Expert retention to consider. Assuming the information in both pages is roughly correct, it appears that Wikipedia has experienced or is experiencing a brain drain, which of course is detrimental to its success. This criticism has been expressed before by a number of people, [5] and the likelihood of this brain drain has not been explored in current statistical analysis. However, one disturbing statistic is the sudden drop-off in new users in October 2006. Every month since Wikipedia was started, its user base had grown exponentially; in October 2006, however, that growth slowed. [6] Was this statistical noise or not? There have been similar drop-offs in user growth across several Wikipedias, suggesting it is not. [7] In each case, as soon as a drop-off occurred, Wikimedia ceased publishing statistics on user growth. Why this has occurred is not clear, and no new statistics have been provided as of September 2007. One possible explanation is stated on Wikimedia's main page on statistics: [8]

Since a year it has become increasingly difficult to produce valid dumps for the largest wikipedias. Until that problem is fixed some figures will be outdated.

This problem, stemming from either financial shortfalls or incompetence, is itself a demonstration of Wikipedia failure. If Wikimedia isn't even capable of regularly collecting and compiling statistics on Wikipedia's success, then what can it do and how can we expect the Wikipedia project to succeed?

Questioning these criteria

Is it a bad idea to use Featured Article or Good Article status as criteria for judging the number of excellent articles in Wikipedia? It is possible that many or most articles that meet the featured article criteria or good article criteria have not been officially reviewed, because review is a time-intensive process that often suffers from a backlog of nominated articles. The Good Article process historically has had a much less rigorous promotion process than the featured article process, so some editors reject it as a measure of article quality. In addition, the editors who work on articles that could potentially pass either the "good" or "featured" criteria may decline to participate in those processes because they see them as bureaucratic, unpleasant, and predominantly run by non-experts in specialized topics.

If these processes do not succeed in recognizing quality content, then this may be a failure of Wikipedia to perform accurate self-assessment rather than a failure to produce quality articles.

Gender bias

Former Wikimedia Foundation executive Sue Gardner provided nine reasons, offered by female Wikipedia editors, "Why Women Don't Edit Wikipedia." [9]

The issue of gender bias on Wikipedia, also known as the gender gap or gender imbalance, is the finding that between 84 and 91 percent of Wikipedia editors are male, [10] [11] which leads to systemic bias. It is one of the criticisms of Wikipedia. The Wikipedia community has acknowledged the problem and is attempting to narrow this gender gap. In August 2014, Wikipedia co-founder Jimmy Wales announced in a BBC interview the Wikimedia Foundation's plans for "doubling down" on the issue of gender bias at Wikipedia. Wales said the Foundation would be open to more outreach and more software changes. [12]

Food for thought

If Wikipedia just aimed to be a social website where people with similar interests could come together and write articles about anything they liked, it would certainly be succeeding. However, its stated aim is to be an encyclopedia, and not just that but an encyclopedia of the highest quality. Almost two decades of work have resulted in only a few tens of thousands of articles of good or excellent quality, at which rate it will take many more decades to produce the quantity of good or excellent articles found in traditional encyclopedias. Millions of articles are mediocre to poor in quality.

Open questions

  • Has the system failed to produce a quality encyclopedia? If so, why?
  • Is change necessary?
  • If it is, then is radical change required, or just small adjustments to the current set-up?
  • Does this matter, given that Wikipedia is one of the most popular websites in the world?
  • Does popularity establish authenticity?
  • What is Wikipedia really, and what do we want it to be?
  • Are the statistical measures introduced here relevant to the conclusions drawn?
  • Are Wikipedia's own criteria for success accurately reflected here?
  • Are the Featured Article and Good Article designations useful for determining the number of quality articles in Wikipedia? If they are not, how can they be reformed?
  • At what rate is the number of new user accounts increasing?
  • Does the number of active users increase in the same way as new user accounts, or do significant numbers of editors leave the project?
  • Could it help to introduce one or more of the following:
  1. A clearer vision and mission statement, prominently displayed?
  2. Better defined performance metrics for articles or edits?
  3. Voting, whether by popularity or by "distinguished members" (opening a can of chicken/egg soup here)?
  4. Profiling authors/editors to identify promising candidates or repeat offenders so as to offer them voluntary coaching/mentoring (on private channels, not in public on discussion pages)?
  5. Offering references to alternative sites so as to channel creative energies of writers who repeatedly fail to meet Wikipedia criteria?

Responses to alleged rebuttals and inadequate responses

The sister essay contains a great many "rebuttals" and "responses" to this essay based upon a number of sources. This essay concludes that these alleged "rebuttals" in the sister essay are weak and its responses are inadequate. It relies on pseudoscientific statistical special pleading which seems to both stem from and play on the general public's misconceptions about appropriate statistical analysis and science. This is one of the root causes of problems on Wikipedia to begin with. The poor quality of Wikipedia:Wikipedia is succeeding is itself a reflection of the systemic problems of pseudoscience and fringe theories on Wikipedia.

Clarifying possible misconceptions

A popular misconception among the public, which is also encouraged by the media, is the claim, "A scientist proved it in a study, so it must be true!" Science relies on proper methodology, objectivity, and replicability, among other things. The sister essay invokes a handful of studies without addressing criticism of their methodology or the fact that they arguably haven't been replicated. In particular, the single study by Nature is held up as holy-grail proof that Wikipedia is as accurate as Britannica, though its specific methodology is ignored, peer reviews are not cited, and so on.

In addition, it is important to clarify that correlation does not imply causation, another fallacy the essay seems to invoke by repeatedly evangelizing about Wikipedia's growth without homing in on where that growth has happened and how, that is, without specifying the cause.

Outside scientific studies confirming Wikipedia failure

First of all, it should be clarified that the burden of proof rests on those making positive assertions, which includes "Wikipedia is succeeding." In a 2005 study, Emigh and Herring note that there are not yet many formal studies of Wikipedia or its model, and suggest that Wikipedia achieves its results by social means: self-norming, a core of active users watching for problems, and expectations of encyclopedic text drawn from the wider culture. [13] Such assumptions regarding "self-norming" appear to bear similarities to pseudoscientific Marxist and anarchist assumptions about human behavior under socialism and anarchism. While the Wikipedia model has not been evaluated directly, a broad array of research in psychology, sociology, and economics would likely undermine it: psychology points to cognitive biases (which group participation tends to worsen rather than correct; see groupthink), sociology to group dynamics, and economics to factors such as information asymmetry and bounded rationality.

With this in mind, it should also be clarified that the assertion that Wikipedia's failure has not in any way been validated by outside studies is patently false. Just as the definition of "success" has been skewed, if we inappropriately define "failure" as "total apocalyptic, nightmarish collapse," then no, that hasn't happened yet, so of course it hasn't been proven. But failure is more accurately defined by the question: "Is Wikipedia moving in the right direction?" According to a number of studies, such as the study by the University of Minnesota, the answer is no. [14]

Their abstract reads:

Wikipedia’s brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that frequent editors dominate what people see when they visit Wikipedia, and that this domination is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings.

Their specific policy recommendations:

It is likely that vandals will continue working to defeat the bots, leading to an arms race. Thus, continued work on automatic detection of damage is important. Our results suggest types of damage to focus on; the good news is that the results show little subtlety among most vandals. We also generally believe in augmentation, not automation. That is, we prefer intelligent task routing approaches, where automation directs humans to potential damage incidents, but humans make the final decision.

This proposal has been completely ignored, but it has merit. For instance, in order to address the problem of unreliable sources and fringe views, a "greylist" could be created which automatically generates a list of articles likely to contain inappropriate edits, based upon the likelihood that certain sources will be misused again and again. This could more appropriately address extreme violations of WP:NPOV and WP:V, which are not caught by bots, while still allowing humans to make the final decision as to what constitutes a "reliable source".
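
A minimal sketch of how such a greylist generator might work, assuming a hypothetical hand-maintained list of frequently misused domains and article wikitext already fetched by some other means; the domain names, sample articles, and function names here are illustrative, not an existing Wikipedia feature:

    # Hypothetical "greylist" generator: flag articles whose wikitext cites
    # domains that editors have previously found to be frequently misused.
    # The domain list and the sample articles are illustrative placeholders.
    import re

    FREQUENTLY_MISUSED = {"example-tabloid.com", "fringe-journal.example.org"}

    def cited_domains(wikitext: str) -> set[str]:
        """Extract bare domains from external links in the wikitext."""
        return {m.lower() for m in re.findall(r"https?://([^/\s\]|]+)", wikitext)}

    def greylist(articles: dict[str, str]) -> list[str]:
        """Titles of articles citing a greylisted domain, queued for human review;
        the final judgement about each source stays with editors."""
        return [title for title, text in articles.items()
                if cited_domains(text) & FREQUENTLY_MISUSED]

    sample = {
        "Some article": "Claim sourced to [http://example-tabloid.com/story this].",
        "Another article": "Claim sourced to [https://doi.org/10.1000/xyz this].",
    }
    print(greylist(sample))   # -> ['Some article']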

As for the study by Nature magazine suggesting Wikipedia is as reliable as Encyclopedia Britannica, [15] this essay rejects that study on the grounds that it rests on the same flawed methodological assumptions as the sister essay. Furthermore, it is possible that the more specific claims made by The Register regarding the study have merit:

"…Nature sent only misleading fragments of some Britannica articles to the reviewers, sent extracts of the children's version and Britannica's 'book of the year' to others, and in one case, simply stitched together bits from different articles and inserted its own material, passing it off as a single Britannica entry." [16]

Encyclopedia Britannica rejected the study, and while its analysis is unreliable for obvious reasons of bias, its claims do support a possible hypothesis, which itself bolsters the conclusion of this essay. Britannica denied the validity of the Nature study, claiming that it was "fatally flawed" on the grounds that the Britannica extracts were compilations that sometimes included articles written for the youth version. [17] Nature acknowledged the compiled nature of some of the Britannica extracts, but disputed the claim that this invalidated the conclusions of the study. [18] Encyclopedia Britannica also argued that while the Nature study showed a similar error rate between the two encyclopedias, a breakdown of the errors indicated that the mistakes in Wikipedia were more often the inclusion of incorrect facts, while the mistakes in Britannica were "errors of omission".

There was also a study in the journal Reference Services Review which found that Wikipedia is less reliable than other references. From their abstract: [19]

The study did reveal inaccuracies in eight of the nine entries and exposed major flaws in at least two of the nine Wikipedia articles. Overall, Wikipedia's accuracy rate was 80 percent compared with 95–96 percent accuracy within the other sources. This study does support the claim that Wikipedia is less reliable than other reference resources.

Supporters of the claim that WP:Wikipedia is succeeding may cite the study by Fernanda Viégas of the MIT Media Lab and Martin Wattenberg and Kushal Dave of IBM Research, which found that most vandal edits were reverted within around five minutes. [20] However, this isn't a particularly controversial or striking conclusion, nor is it particularly relevant. The same conclusion was reached by the researchers of the University of Minnesota. From a sociological perspective, Wikipedia's ability to prevent obvious vandalism is intriguing, but that alone is not how Wikipedia's success is defined, since the problems stem from systemic bias and erosion of good content, which, unlike random vandalism, cannot simply be addressed through the use of large networks of bots crawling Wikipedia and making automatic reverts according to a set algorithm. The development of such a network of bots is, according to the University of Minnesota, one of the main reasons why blatant vandalism is difficult on Wikipedia.

A study by Dartmouth College found that there are perverse incentives involved in the way Wikipedia works, such that good editors find absolutely no social rewards for good editing, while paradoxically a great many good editors, who are not vandals or trolls, edit Wikipedia only a few times and then leave. [21] This offers an empirical basis for the hypothesis of brain drain and the idea that many potentially good contributors are regularly turned away by Wikipedia's chaotic process, while the best veteran editors have to struggle to keep believing in the project when they continually face so many unnecessary obstacles to appropriate edits. This study has been criticized as having a flawed methodology, but despite its flaws it was received favorably by the Scientific American community. [22] [23]

An academic study of Wikipedia articles, circa September 2007, also found that the level of debate among Wikipedia editors on controversial topics often degenerated into counterproductive squabbling: "For uncontroversial, 'stable' topics self-selection also ensures that members of editorial groups are substantially well-aligned with each other in their interests, backgrounds, and overall understanding of the topics...For controversial topics, on the other hand, self-selection may produce a strongly misaligned editorial group. It can lead to conflicts among the editorial group members, continuous edit wars, and may require the use of formal work coordination and control mechanisms. These may include intervention by administrators who enact dispute review and mediation processes, [or] completely disallow or limit and coordinate the types and sources of edits." [24]

In conclusion, this essay finds support in studies conducted by the University of Minnesota, Dartmouth College, and Florida State University, and in the study published in Reference Services Review.

An absolute definition of Wikipedia success

"Absolute" statistics in scientific analysis are generally to be regarded with immediate skepticism, because they do not measure continuous rates of change, only self-selected variables at certain fixed points in time ("snapshots"). The sister essay's unique definition of "success," appears to differ widely from the goal of the Wikipedia project explicitly stated in Wikimedia's Mission statement, early statements made about Wikipedia made by Jimbo Wales, and the definition of encyclopedia itself as a "compendium of human knowledge." Based on these definitions, one cannot argue that Wikipedia is succeeding because it has managed to develop an absolute amount of content that furthers that goal, while overall the majority of its resources go against the goal, through encouraging public ignorance and misinformation, and wasting resources on hosting unencyclopedic information on a website calling itself an "encyclopedia." The rebuttal also carelessly invokes the massive growth of Wikipedia without noting exactly where that growth has happened. This essay acknowledges the growth of Wikipedia but asserts that such growth is not happening in the right places and in the right ways.

According to Wikimedia's mission statement, the goal of Wikipedia is "to empower and engage people around the world to collect and develop educational content... ...and to disseminate it effectively and globally."

The definition of "success" appears to have been downgraded recently in response to the clear evidence that Wikipedia is failing. The argument, then, is not that "Wikipedia is succeeding," but that "Encyclopedias were never that useful to begin with." In response to criticism, Wikipedia should change, not lower the bar.

This is demonstrated by comparing earlier statements made by Jimbo Wales with the more recent cynicism and skepticism that grew as time went on.

Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing.

— Jimmy Wales, July 2004 [25]

I frequently counsel people who are getting frustrated about an edit war to think about someone who lives without clean drinking water, without any proper means of education, and how our work might someday help that person. It puts flamewars into some perspective, I think.

— Jimmy Wales, July 2004 [25]

We help the internet not suck.

— Jimmy Wales, September 2005 [26]

Our goal has always been Britannica or better quality. We don't always achieve that.

— Jimmy Wales, September 2005 [26]

No, I don't think people should cite [Wikipedia], and I don't think people should cite Britannica, either – the error rate there isn't very good. People shouldn't be citing encyclopedias in the first place.

— Jimmy Wales, September 2005 [27]

I can NOT emphasize this enough. There seems to be a terrible bias among some editors that some sort of random speculative 'I heard it somewhere' pseudo information is to be tagged with a 'needs a cite' tag. Wrong. It should be removed, aggressively, unless it can be sourced. This is true of all information, but it is particularly true of negative information about living persons.

— Jimmy Wales, May 2006 [28]

In a discussion on IRC on January 27, 2008, User:Bjweeks, who was defending Wikipedia's credibility, stated:

People want to know about Harry Potter's gay teacher, that is why people use Wikipedia.

—  User:Bjweeks Wikipedia IRC [29]

The sister essay to this article relies on an "absolute measure" of Wikipedia's success, which doesn't seem to match what Jimbo originally had in mind when he founded Wikipedia, the Wikimedia Foundation's original definition of "success," or even the claim that Wikipedia should be "Britannica or better quality." To have the same quality as Britannica would involve having the same marginal rate of accuracy, a relative measure of success, not some arbitrary absolute amount of encyclopedic content. The absurdity of an "absolute" measure of success or quality can be demonstrated through the following analogy: Let's say you have a book, not owned by Britannica, which contains every piece of information contained within Encyclopedia Britannica. Let us then say that you place this book on top of a landfill, which is continually growing and is even often defended because of the policy WP:PAPER. What you end up with is Britannica's content buried under an ever-growing heap.

Non-featurable content isn't necessarily "garbage," but the analogy fits because it's not the kind of material that a traditional encyclopedia editor could rubber-stamp and say, "This is good encyclopedic material!"

To look at that and say, "Wikipedia is succeeding! I mean, look, all the information in Britannica is there and besides, in German Wikipedia, the landfill isn't growing!" is absurd. If Wikipedia were to delete everything other than the articles which are currently featured and then focus on creating articles worth featuring, then it would be sensible to call it an encyclopedia that is roughly comparable in accuracy to Britannica.

The essay regularly engages in such special pleading, including the following assertion:

"...many B- and Start-level articles are indeed superior to their counterparts in standard encyclopedias, such as the Encyclopædia Britannica. For example, the coverage of the B-level article, Secondary structure, a core topic in protein science, is far superior to its coverage in the Britannica..."

Many humans have red hair, therefore most or all humans have red hair? This assertion doesn't follow, not to mention that it invokes an ambiguous subjective opinion which shouldn't be found in value-free scientific analysis.

Growth, by itself, is not necessarily a good thing if the "growth" is in unencyclopedic material rather than encyclopedic material. If an article is "roughly comparable" in quality to its counterpart in Britannica, then it is fair to assume that it should be featured.

Wikipedia success is qualitative, not quantitative

The sister essay seems to assume that Wikipedia's success is quantitative rather than qualitative. Its success is defined not by the quantity of articles, but by the quantity of high-quality articles. As an example, the sister essay cites an "independent test".

To demonstrate the above point, we return to the "landfill" analogy. Two common objects found in landfills are banana peels and refrigerators, the latter being far larger than the former. The fact that the refrigerator is larger than the banana peel, however, does not establish that arguing over larger "trash" rather than smaller "trash" is necessarily progress. Plenty of edit wars can certainly be fought over large articles and large amounts of content as opposed to stubs. In fact, it is intuitive that this would be the case, since stubs are stubs precisely because they are viewed less often.

Progress is only made when there is a greater number of good-quality edits, not just a greater number of large edits as opposed to small ones. Large numbers of large edits, by themselves, do not necessarily imply an increase in articles of high quality, something which isn't captured by the "independent study" cited.

"Maintenance of high-quality articles"

For this argument, the essay offers no rebuttal, which is why it's not clear why its assertions are given the title "rebuttal." It acknowledges that high-quality articles are poorly maintained. It disputes that this reflects Wikipedia's failure by use of a straw man: the argument was not about random acts of obvious vandalism done in bad faith, but about the continual destruction of high-quality articles, which may in fact even be done in good faith. If it acknowledges the fact that high-quality articles are regularly poorly maintained, then it should address that fact and not dispute it by bringing up the irrelevant fact that random and blatantly obvious vandalism is rare. This essay, in fact, agrees with that point and makes no assertion otherwise.

Lowering the bar

The idea that encyclopedias were never that reliable or useful to begin with is not an adequate defense of Wikipedia's success. If this were true, then it's not clear why anyone, especially in Wikipedia's early history, should ever have had the enthusiasm that they did. It is often said that Wikipedia is not reliable as an academic source, but that it is a "good starting point" or a "good academic reference." These assertions seem self-contradictory. A good starting point for what? A good starting point for research would be a collection of good sources, something which Wikipedia does not always provide at present. After all, if its sources were accurate, then it ought to be far more accurate. The issue itself is the unreliability of the sources used. As such, it cannot be a starting point for anything other than finding out what the average person generally thinks about a topic, based on what they can dig up on Google in a matter of seconds. In this regard, it is a step above Google, saving people time they would otherwise have to spend looking for sources themselves, but it remains a step below actual encyclopedias.

Furthermore, the idea that the appearance of Wikipedia's failure could somehow be a misleading artifact of editorial standards that are "too high" doesn't seem consistent with the wiki process, because the editorial standards are themselves generated by the wiki process, not arbitrarily determined by expert editors. If the editorial process were too strict, it would be particularly easy for a consensus to develop around lowering the standards. Ironically, the sister essay accuses the editorial standards of being too high, while at the same time it refuses to come out and directly say, "And to address this, we think editorial standards should be lowered."

The assumption of limitless patience

Of all the assertions made in the sister essay, the "assumption of limitless patience" stands out like a sore thumb.

Critics argue: About one article a day on average becomes featured; at this rate, it will take 4,380 years for all the currently existing articles to meet FA criteria.
Wikipedians are very patient.

Wikipedia editors are not limitlessly patient. It's admittedly tough to measure a possible brain drain on Wikipedia without having information on users' credentials or educational status. However, there are clearly a number of cases of expert editors, and editors who aren't experts but are just plain good, leaving Wikipedia because they get fed up with drama and bureaucracy. This is not to engage in special pleading, but to use such cases as illustrations (see case study) of how existing Wikipedia policy works against good editing.

Speculations regarding German Wikipedia

It's tough to say why German Wikipedia has done so well, but one possible speculation is this: According to Kim Bruning, German Wikipedia is run as an "Adminocracy." In other words, they have a very low tolerance for trolling, and admins are given a great deal of deference in dealing with trolls and vandals. However, see below: German restrictions in 2009. Other wikis, such as Dutch Wikipedia, which are dominated more by inclusionist populism, are crumbling [citation needed] due to majoritarianism and bureaucracy, two things Wikipedia is not. Wikipedia administrator Leon Weber stated in May 2006:

If I see [a contributor] is publishing shit, maybe by swearing or not making sense, I warn him ...the second time he turns on, I block him. [30]

This may shock some people, but it's perfectly acceptable if, in fact, Weber is in the right when he does that. And the claim above is in direct contradiction to the flowchart which describes how to build consensus.

Why isn't Weber thinking of "a reasonable change that might integrate" with the troll's ideas? The answer: Because there are none, when you're dealing with trolls, in which case they should be blocked either in accordance with policy or in accordance with ignore all rules which is itself policy. According to WP:IAR itself, Jimmy has said it was the first rule and has "always" guided the development of Wikipedia.

Then again, perhaps not? Jimmy has also argued that his authority rests on an appeal to tradition:

My authority and the authority of the ArbCom does not derive from the Foundation directly but from the longstanding historical traditions of our community

— Jimmy Wales, December 2007 [31]

That remark would seem to suggest that it isn't really important to ignore the rules at all, but that the policies derived from past consensus are all that matter. Ignore all rules could be regarded as a policy to be upheld based on the claim that it is a "long-standing tradition," but that no longer seems to be the case, at least it isn't worded that way any longer.

Anyway, in that flowchart, Ignore All Rules isn't present and the result has been an unwritten policy that Ignore all rules is to be ignored itself, which favors violations of WP:NPOV, WP:RS, and WP:FRINGE.

It could be speculated then that German Wikipedia has been more effective because the role of administrators and ignore all rules in the wiki process has been more clear. The four main principles of German Wikipedia [32] are:

  1. Wikipedia is an encyclopedia
  2. A neutral point-of-view (where rationality and objectivity are both heavily emphasized)
  3. Free content
  4. No personal attacks

This is far simpler, clearer, and apparently more effective in practice, and this essay recommends that all wikis follow the example of German Wikipedia and reject the absurd proposals made in Wikipedia:Wikipedia is succeeding. A more thorough review of those proposals will be published in the future.

German restrictions in 2009

The judgment that the German Wikipedia has "done so well" is a matter of opinion. There are many German articles that contain accurate facts, as reviewed for article-verification status. However, the German Wikipedia is extremely hollow and limited in its content, as easily seen by the rejection, during 2009, of many articles translated from English Wikipedia. Mainstream, simple articles, such as lists of American landmarks, were rejected when just a few words were translated into awkward German phrases. What should have been a simple article transfer, needing just a few days of translation, was delayed by months as people complained that the wording of a quickly translated article was "not perfect" and that "having no article would have been better" than to bother them with translations of major articles viewed hundreds of times per week. In that sense, the German Wikipedia is a failure in terms of scope, due to rejecting translated articles that aren't grammatically correct. That failure of being too restrictive caused even many simple articles to be rejected from German Wikipedia.

Written by laymen with unenforced policies

The basic problem of Wikipedia's hollow articles, or meager coverage of subjects, stems from the systemic failure of the concept: take a group of laymen, combine them with unenforced policies about articles ("that anyone can edit"), and the result should not be expected to be high-quality, expert articles where everyone gleefully cooperates. Instead, the limited knowledge of the laymen is further hindered, or thwarted, when people realize that policy violations occur without enforcement, and the system becomes demoralizing to many experts and laymen alike.

A system that collects chaotic information, managed in a chaotic manner, cannot be expected to generate high-quality results and have polite, cooperative writers. Instead, many marginal (and some good) articles are further hacked by angry people, who have been annoyed by others behaving badly and tempted to hack whatever possible, inserting text in chaotic places, before eventually quitting in disgust. A system "that anyone can edit" cannot, by definition, be enforced to adequately limit hacking.

There are no qualification levels for so-called "trusted" users, who would lose edit access as enforcement for detrimental behavior. Instead, many have learned to create sock-puppets (multiple user names) so they can continue to edit in many other areas when warned to stop hack-edits they have been caught making. There are no trust-levels to empower users to edit more after they have shown a pattern of reasonable behavior. As a result, many articles written by experts (such as medical students, nurses, or computer scientists) have been hacked, by wikirot, to hollow out the professional or expert details, slashed by angry or untrustworthy people pushing peculiar ideas. It is so predictable: angry, unrestrained people will, invariably, generate many problems in many formerly expert articles, and so they have. Quality control requires control.
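
To make the missing "trust-levels" idea concrete, here is a deliberately simplified sketch of how edit rights could expand with a good track record and contract after detrimental behavior; the thresholds and level names are invented for illustration, and nothing like this is an existing MediaWiki feature:

    # Illustrative trust-level scheme: rights expand with a record of good
    # edits and contract after reverted or damaging ones. The thresholds are
    # arbitrary, and nothing like this is currently enforced on Wikipedia.
    from dataclasses import dataclass

    @dataclass
    class Editor:
        name: str
        good_edits: int = 0
        reverted_edits: int = 0

        @property
        def level(self) -> str:
            if self.reverted_edits >= 5 and self.reverted_edits > self.good_edits:
                return "restricted"      # loses edit access pending review
            if self.good_edits >= 500:
                return "trusted"         # earns wider editing rights
            if self.good_edits >= 50:
                return "confirmed"
            return "new"

    alice = Editor("Alice", good_edits=620, reverted_edits=3)
    bob = Editor("Bob", good_edits=2, reverted_edits=9)
    print(alice.level, bob.level)        # -> trusted restricted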

Wikipedia became battleground playpen

As a result of the tolerance for angry, unrestrained writers, the chaos of Wikipedia has driven off many serious people who prefer progress as a succession of solid, accomplished articles. Instead, Wikipedia became a battleground for angry people and a playpen for carefree writers. The people who remained to rewrite Wikipedia fell into two groups:

  • intense people who treat Wikipedia as a battleground and spend time changing many articles to emphasize their opinions; or
  • carefree, fun-loving people who ignore controversies, while dabbling in entertaining subjects.

The results can be quite predictable: the intense people will focus on serious subjects, with an intense mindset to push various fringe ideas. Do not expect serious articles to remain neutral; expect them, rather, to be rewritten with intensity. Meanwhile, the carefree people will focus on non-controversial pop-culture articles or other fun topics.

The intense people can be expected to edit many articles: the intensity of their ideas is the same intensity to re-write many articles from their viewpoints. They will make time to slant many articles and try to overwhelm any opposing ideas. They will, they really will. And don't try to stop them.

The carefree people will push for fun or frivolity, in related articles. A likely result is that humor, puns or comedic irony will appear in their writings, either as related quotes or slipped into the article text.

The authority is not always trustworthy

What counts as a reliable source is relative to the observer, and Wikipedia's editorial administration is no exception. To illustrate: Wikipedia is heavily influenced by what 'reliability' means for an encyclopedia judged by, for example, a university's standards ('a reliable source must not be user-edited or original research'). A correlation of trustworthy information is not necessarily an indication of reliability. Philosophically, does reliability even exist? It is a pattern of recurrences and can also become a positive feedback loop. Sociologically, a positive feedback loop can cause a system to collapse suddenly ('the bridge seems trustworthy to walk on, so let's keep walking on it; suddenly the bridge falls'). The analogy is that a source's reputation for being 'right', and for continuing to be 'right', can suddenly be proven wrong, and only in hindsight does the person who cited it see that. The Big Bang theory has recently been challenged, and further back in history people used to believe the Sun went around the Earth and persecuted those who held the later viewpoint that the Earth goes around the Sun. Philosophically, this expands further into the philosophy of knowledge.

References

  1. ^ "Wikipedia Celebrates 750 Years Of American Independence". 2010-06-26. Retrieved 2014-03-24.
  2. ^ See Wikipedia:List of hoaxes on Wikipedia for examples.
  3. ^ The Decline of Wikipedia: Even As More People Than Ever Rely on It, Fewer People Create It | MIT Technology Review
  4. ^ Halfaker, A.; Geiger, R. S.; Morgan, J. T.; Riedl, J. (2012). "The Rise and Decline of an Open Collaboration System: How Wikipedia's Reaction to Popularity Is Causing Its Decline". American Behavioral Scientist. 57 (5): 664. doi:10.1177/0002764212469365. ISSN 0002-7642.
  5. ^ See WP:Anti-elitism
  6. ^ English Wikipedia statistics, English Wikipedia. Retrieved on 2008-01-24.
  7. ^ Wikipedia statistics – New Wikipedians, English Wikipedia. Retrieved on 2008-01-24.
  8. ^ "Wikipedia Statistics – Site Map". Retrieved 2008-01-25.
  9. ^ Gardner, Sue (19 February 2011). "Nine Reasons Why Women Don't Edit Wikipedia, In Their Own Words". suegardner.org (blog).
  10. ^ Andrew Lih (20 June 2015). "Can Wikipedia Survive?". www.nytimes.com. Washington. Retrieved 21 June 2015. ...the considerable and often-noted gender gap among Wikipedia editors; in 2011, less than 15 percent were women.
  11. ^ Statistics based on Wikimedia Foundation Wikipedia editor surveys 2011 (Nov. 2010-April 2011) and November 2011 (April - October 2011)
  12. ^ Wikipedia 'completely failed' to fix gender imbalance, BBC interview with Jimmy Wales, August 8, 2014; starting at 45 seconds.
  13. ^ Emigh & Herring (2005). "Collaborative Authoring on the Web: A Genre Analysis of Online Encyclopedias", Proceedings of the Thirty-Eighth Hawai'i International Conference on System Sciences. (PDF)
  14. ^ Priedhorsky, Chen, Lam, Panciera, Terveen, Riedl. "Creating, Destroying, and Restoring Value in Wikipedia" (PDF). Retrieved 2008-01-25.
  15. ^ Stephen Cauchil (2005-12-15). "Online encyclopedias put to the test". The Age. Melbourne. Retrieved 2008-01-25.
  16. ^ Orlowski, Andrew (2006-03-26). "Nature mag cooked Wikipedia study". The Register. Retrieved 2008-01-25.
  17. ^ "Fatally Flawed" (PDF). Encyclopædia Britannica. March 2006. Retrieved 14 July 2007.
  18. ^ "Britannica attacks". Nature. 440 (7084): 582. 2006-03-30. doi: 10.1038/440582b. PMID  16572128. Retrieved 2006-07-14.
  19. ^ "Emerald: Article Request". Retrieved 2008-02-19.
  20. ^ Fernanda Viégas; Martin Wattenberg; Kushal Dave. "Studying Cooperation and Conflict between Authors with history flow Visualizations" (PDF). MIT.
  21. ^ Anthony, Denise; Smith, Sean W.; Williamson, Tim. "The Quality of Open Source Production: Zealots and Good Samaritans in the Case of Wikipedia". Dartmouth University. Retrieved 2008-01-24.
  22. ^ Larry Greenemeier. "Wikipedia "Good Samaritans" Are on the Money". Scientific American. Retrieved 2008-01-24.
  23. ^ David Drake. "Dartmouth Wikipedia Study Flawed But Still Valuable". Scientific American. Retrieved 2008-01-24.
  24. ^ Besiki Stvilia; Michael Twidale; Linda Smith; Les Gasser (Oct 2007). "Information Quality Work Organization in Wikipedia" (PDF, 38 pages, on collaborative quality control). Florida State University. Retrieved 2009-01-22.
  25. ^ a b Jimmy Wales (2004-07-24). "Wikipedia Founder Jimmy Wales Responds". Slashdot. Retrieved 2008-01-24.
  26. ^ a b Jimmy Wales (2004-07-24). "C-SPAN Interview". C-SPAN. Retrieved 2008-01-24.
  27. ^ Jimmy Wales (2005-12-14). "Wikipedia: "A Work in Progress"". BusinessWeek. Retrieved 2008-01-24.
  28. ^ Jimmy Wales (2006-05-16). "Zero information is preferred to misleading or false information". WikiEN-l electronic mailing list archive. Retrieved 2008-01-24.
  29. ^ This quote was used with User:Bjweeks' permission.
  30. ^ Peter Munro (2005-09-20). "Life, the universe and Wiki". Sydney Morning Herald. Retrieved 2008-01-24.
  31. ^ Jimmy Wales (2007-12-26). "Life, the universe and Wiki". Sydney Morning Herald. Retrieved 2008-02-04.
  32. ^ "Wikipedia: Basic Principles, German Wikipedia". Retrieved 2008-01-25.


From Wikipedia, the free encyclopedia

Note: Unless stated otherwise or directly sourced, the statistics used in this essay were collected in February 2007 and may now be out-of-date.

Contrast with essay: Wikipedia:Wikipedia is succeeding. See also Wikipedia:Failure, on the virtues of failure.

Is Wikipedia failing in its aim of becoming a reputable, reliable encyclopedia? Here are some illustrations of ways in which it is not fulfilling that aim.

Assumptions

To assess the quality of Wikipedia's articles, some assumptions are necessary. Here it is assumed that:

  • The criteria defined by the Wikipedia 1.0 editorial team at {{ grading scheme}} accurately reflect the quality of the articles to which these ratings have been applied.
  • That articles that are neither FA nor A-class fall below the standards that an encyclopedia should demand of its content (possibly with the exception of some WikiProjects that have chosen to have GA above A-class instead of the other way round).
  • That the sample of 300,000 articles assessed, with results listed at WP:1.0/I, is representative of the whole encyclopedia.
  • The definition of an encyclopedia in Wikipedia's article encyclopedia as a "compendium of human knowledge" is correct.

Criteria which indicate substantial failings

Performance on core topics

Success is not guaranteed.

Vital articles lists 988 articles on topics that can be considered essential. These topics should have articles of the very highest quality – ideally a featured article. So do they? In fact, of those 988, only 81 are featured articles and 9 are A-class. This means that 89% of the essential topics that should have excellent articles fall short of the standard, assuming that all vital articles that meet the FA criteria have been nominated for FA status.

Do they fall short by a long way? 65 are listed as good articles, which, according to Template:Grading scheme, means that 'other encyclopedias could do a better job'. Some editors have criticised the GA process as inconsistent and arbitrary, so the quality of those articles is further in doubt. The remaining 833 are B-class, C-class, stub-class or start-class on the assessment scale; this indicates that many articles require substantial work before they will match or exceed the standards found in other encyclopedias.

On current trends, how long will it take before all the Vital Articles are featured or A-class articles? On 1 January 2006, 41 of them were featured; by 1 January 2007, this had risen to 71. By 1 July 2011 (four and a half years later) the number was 90, indicating a sharp decrease in promotion rate. Even assuming that the current rate (19 articles in 54 months) declines no further, at this rate of approximately four a year it will take 225 years for all of the vital articles to reach the standards expected of them.

Performance on broader topics

There are 6,464 featured articles now. There are also 39,389 good articles. However, there are currently 6,803,974 articles on Wikipedia. This means that slightly more than 99.26% of all the articles on Wikipedia have not yet been assessed as featured or good articles. In many cases this is because they are not considered well-written, verifiable, broad, or comprehensive in their coverage. The results of the largest-scale assessment of Wikipedia content, covering 18% of the total number of articles, can be found at WP:1.0/I. These results show that 0.7% of assessed articles are either FAs or A-class articles.

One useful, informal exercise for a reader is to critically read ten random articles. The numbers above suggest that on average, you'd expect to find one FA or A-class article in every 143 articles you looked at (based on WP:1.0/I), or every 762 (based on total numbers of FAs and A-class articles).

Maintenance of standards

Wikipedia Celebrates 750 Years Of American Independence
"…While Wikipedia's "American Independence" page remains available to all site visitors, administrators have suspended additions and further edits to its content due to vandalism." The Onion July 26, 2006 [1]

Do articles which are judged to have reached the highest standards remain excellent for a long time, or do standards decline as well-meant but poor quality edits cause standards to fall over time? There are currently 340 former featured articles, so that more than 20% of all articles that have ever been featured are no longer featured.

Many editors observe that an FA that is not actively maintained inevitably declines; for an example see Ryanair, which attracts large numbers of highly biased edits which have wrecked a formerly excellent article. Sun's lead section was reduced to a few short sentences by an editor who either hadn't read or didn't understand the guidelines on what a lead section is supposed to be, and no-one has restored the previously existing summary. A whole section of Mauna Loa was removed by a vandal in November, and was not restored for a month. Generally, if the primary author of an FA does not take care of it, checking changes up to several times a day, it is likely to have its quality compromised by unnoticed vandalism or, far more damaging in the long term, well-intentioned but poor quality edits.

Some or many articles may lose featured article status because they do not meet current standards, rather than because they have declined in quality. Without case-by-case analysis it is impossible to say what proportion this is the case for. However, we can note that the featured article review process has not been as successful as would be ideal at encouraging featured articles to improve in line with rising standards.

Meanwhile, on the other hand, certain hoax articles containing blatantly incorrect information can stay up for years [2]. In January 2008, the longest-running hoax ever was discovered, the article on Brahmanical See, which was a hoax claiming that Hinduism has a Pope. This hoax existed for roughly 3½ years.

Rate of quality article production

Many argue that Wikipedia is a work in progress and that, given time, all articles will reach very high standards. Unfortunately, this is not borne out by the rate at which articles are currently being judged to meet featured article criteria. About one article a day on average becomes featured; at this rate, it will take 4,380 years for all the currently existing articles to meet FA criteria. If the current approximately exponential growth rate of Wikipedia (which will see it double in size in about the next 500 days) continues, then on current trends there will never be a time when all articles have been promoted to featured article status.

Should we even expect all articles to meet the featured article criteria? A majority of people who commented on one earlier discussion felt that the featured article criteria do indeed define the standards that all articles need to meet.

Is WP:FA a bottleneck? The rate at which articles have been promoted has remained more or less constant for well over a year (see WP:GAS), while article creation rates have increased exponentially throughout that time. If the system prevents large numbers of quality articles from being recognised as such, then that indicates that some kind of reform of the system is necessary.

Special:Recentchanges provides evidence that the rate of addition of substantial encyclopaedic content is low. You may find it informative to look at the last 200 recent changes and count how many of them are directly building the encyclopedia. That means observing reasonably sound content being added (and not under a 'trivia' header) to an article that is not a borderline AFD candidate. (The info on bytes added/removed narrows the search quite quickly.) Typically this reveals less than ten substantive article-space change in 200. One such analysis can be found at User:Opabinia regalis/Article statistics.

The strength and size of the core community

From 2007 to 2012, the total number of active Wikipedia editors gradually declined. [3] [4]

One of the most important aspects of Wikipedia is what could be called its "core community," as distinguished from the "community-at-large." The distinction is that the community-at-large is composed of everyone who edits Wikipedia, while the core community is the small group of veteran editors who regularly watch policy pages and facilitate the administration of various aspects of the site (such as mediation, the help desk, the reference desk, and the various noticeboards). Earlier in Wikipedia's history, there was a project called Esperanza designed to strengthen Wikipedia's core community. In mid-2007, Esperanza was disbanded after facing criticism. At first, other projects were started to fulfill the same role that Esperanza did. Several of these projects have since been abandoned.

Projects stemming from Esperanza which were abandoned:

Projects stemming from Esperanza which are still ongoing:

Then there are also Expert rebellion and Expert retention to consider. Assuming the information in both pages is roughly correct, it appears that Wikipedia has experienced or is experiencing a brain drain, which is of course detrimental to its success. This criticism has been expressed before by a number of people, [5] and the likelihood of this brain drain has not been explored in current statistical analysis. However, one disturbing statistic is the sudden drop-off in new users in October 2006. In every month since Wikipedia was started, its userbase had grown exponentially; in October 2006, however, that growth slowed. [6] Was this statistical noise? There have been similar drop-offs in user growth across several Wikipedias, suggesting it was not. [7] In each case, as soon as a drop-off occurred, Wikimedia ceased publishing statistics on user growth. Why this has happened is not clear, and no new statistics had been provided as of September 2007. One possible explanation is stated on Wikimedia's main page on statistics: [8]

Since a year it has become increasingly difficult to produce valid dumps for the largest wikipedias. Until that problem is fixed some figures will be outdated.

This problem, stemming from either financial shortfalls or incompetence, is itself a demonstration of Wikipedia failure. If Wikimedia isn't even capable of regularly collecting and compiling statistics on Wikipedia's success, then what can it do and how can we expect the Wikipedia project to succeed?

Questioning these criteria

Is it a bad idea to use Featured Article or Good Article status as criteria for judging the number of excellent articles in Wikipedia? It is possible that many or most articles that meet the featured article criteria or good article criteria have not been officially reviewed, because review is a time-intensive process that often suffers from a backlog of nominated articles. The Good Article process historically has had a much less rigorous promotion process than the featured article process, so some editors reject it as a measure of article quality. In addition, the editors who work on articles that could potentially pass either the "good" or "featured" criteria may decline to participate in those processes because they see them as bureaucratic, unpleasant, and predominantly run by non-experts in specialized topics.

If these processes do not succeed in recognizing quality content, then this may be a failure of Wikipedia to perform accurate self-assessment rather than a failure to produce quality articles.

Gender bias

Former Wikimedia Foundation executive director Sue Gardner collected nine reasons, offered by female Wikipedia editors, for "Why Women Don't Edit Wikipedia." [9]

The issue of gender bias on Wikipedia, also known as the gender gap or gender imbalance, is the finding that between 84 and 91 percent of Wikipedia editors are male, [10] [11] which leads to systemic bias. It is one of the common criticisms of Wikipedia. The Wikipedia community has acknowledged the problem and is attempting to narrow this gender gap. In August 2014, Wikipedia co-founder Jimmy Wales announced in a BBC interview the Wikimedia Foundation's plans for "doubling down" on the issue of gender bias at Wikipedia. Wales said the Foundation would be open to more outreach and more software changes. [12]

Food for thought

If Wikipedia just aimed to be a social website where people with similar interests could come together and write articles about anything they liked, it would certainly be succeeding. However, its stated aim is to be an encyclopedia, and not just that but an encyclopedia of the highest quality. Almost two decades of work have resulted in 3,000 articles of good or excellent quality; at this rate it will take many more decades to produce the quantity of good or excellent articles found in traditional encyclopedias. Millions of articles are mediocre to poor in quality.

Open questions

  • Has the system failed to produce a quality encyclopedia? If so, why?
  • Is change necessary?
  • If it is, then is radical change required, or just small adjustments to the current set-up?
  • Does this matter, given that Wikipedia is one of the most popular websites in the world?
  • Does popularity establish authenticity?
  • What is Wikipedia really, and what do we want it to be?
  • Are the statistical measures introduced here relevant to the conclusions drawn?
  • Are Wikipedia's own criteria for success accurately reflected here?
  • Are the Featured Article and Good Article designations useful for determining the number of quality articles in Wikipedia? If they are not, how can they be reformed?
  • At what rate is the number of new user accounts increasing?
  • Does the number of active users increase in the same way as new user accounts, or do significant numbers of editors leave the project?
  • Could it help any to introduce one or more of the following:
  1. A clearer vision and mission statement, prominently displayed?
  2. Better defined performance metrics for articles or edits?
  3. Voting, either by popularity or by "distinguished members" (opening a chicken-and-egg can of worms here)?
  4. Profiling authors/editors to identify promising candidates or repeat offenders so as to offer them voluntary coaching/mentoring (on private channels, not in public on discussion pages)?
  5. Offering references to alternative sites so as to channel creative energies of writers who repeatedly fail to meet Wikipedia criteria?

Responses to alleged rebuttals and inadequate responses

The sister essay contains a great many "rebuttals" and "responses" to this essay, based on a number of sources. This essay concludes that those alleged "rebuttals" are weak and the responses inadequate. The sister essay relies on pseudoscientific statistical special pleading which seems both to stem from and to play on the general public's misconceptions about appropriate statistical analysis and science. This is one of the root causes of problems on Wikipedia to begin with. The poor quality of Wikipedia:Wikipedia is succeeding is itself a reflection of the systemic problems of pseudoscience and fringe theories on Wikipedia.

Clarifying possible misconceptions

A popular misconception among the public, encouraged by the media, is the claim, "A scientist proved it in a study, so it must be true!" Science relies on proper methodology, objectivity, and replicability, among other things. The sister essay invokes a handful of studies without addressing criticism of their methodology or the fact that they arguably haven't been replicated. In particular, the single study by Nature is held up as the holy-grail proof that Wikipedia is as accurate as Britannica, even though its specific methodology is ignored, peer reviews aren't cited, and so on.

In addition, it is important to clarify that correlation does not imply causation, another fallacy the sister essay seems to invoke by repeatedly evangelizing about Wikipedia's growth without specifying where that growth has happened and how, that is, without identifying its cause.

Outside scientific studies confirming Wikipedia failure

First of all, it should be clarified that the burden of proof rests on those making positive assertions, which includes "Wikipedia is succeeding." In a 2005 study, Emigh and Herring noted that there were not yet many formal studies of Wikipedia or its model, and suggested that Wikipedia achieves its results by social means: self-norming, a core of active users watching for problems, and expectations of encyclopedic text drawn from the wider culture. [13] Such assumptions regarding "self-norming" bear similarities to pseudoscientific Marxist and anarchist assumptions about human behavior under socialism and anarchism. While the Wikipedia model itself has not been evaluated, a broad array of research in psychology, sociology, and economics would likely undermine it: psychology points to cognitive biases (which group participation tends to worsen rather than correct; see groupthink), sociology to group dynamics, and economics to considerations such as information asymmetry and bounded rationality.

With this in mind, it should also be clarified that the assertion that Wikipedia failure has not in any way been validated by outside studies is patently false. Just as the definition of "success" has been skewed, if we inappropriately define "failure" as "total apocalyptic, nightmarish collapse," then no, that hasn't happened yet, so of course it hasn't been proven. But failure is more accurately captured by the question: "Is Wikipedia moving in the right direction?" According to a number of studies, such as the study by the University of Minnesota, the answer is no. [14]

Their abstract reads:

Wikipedia’s brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that frequent editors dominate what people see when they visit Wikipedia, and that this domination is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings.

Their specific policy recommendations:

It is likely that vandals will continue working to defeat the bots, leading to an arms race. Thus, continued work on automatic detection of damage is important. Our results suggest types of damage to focus on; the good news is that the results show little subtlety among most vandals. We also generally believe in augmentation, not automation. That is, we prefer intelligent task routing approaches, where automation directs humans to potential damage incidents, but humans make the final decision.

This proposal has been completely ignored, but it has merit. For instance, to address the problem of unreliable sources and fringe views, a "greylist" could automatically generate a list of articles that likely contain inappropriate edits, based on how frequently certain sources are misused. This could more appropriately address extreme violations of WP:NPOV and WP:V, which are not captured by bots, while still allowing humans to make the final decision as to what constitutes a "reliable source."
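
As an illustration only, a greylist check of this kind might look like the following sketch. The domain names and the greylisted_domains helper are hypothetical placeholders; nothing here refers to an actual list or an existing bot.

    import re

    # Hypothetical, hand-maintained list of domains that are frequently misused
    # as sources. The entries are placeholders, not a judgment on real sites.
    GREYLIST = {"example-tabloid.com", "example-fringe-blog.net"}

    URL_RE = re.compile(r"https?://(?:www\.)?([^/\s\]]+)")

    def greylisted_domains(added_text):
        """Return the greylisted domains cited in newly added wikitext."""
        domains = {m.group(1).lower() for m in URL_RE.finditer(added_text)}
        return domains & GREYLIST

    # A bot could run this over each diff and queue matches for human review
    # rather than reverting automatically ("augmentation, not automation").
    diff_added = "New claim sourced to http://example-tabloid.com/story123"
    flagged = greylisted_domains(diff_added)
    if flagged:
        print("Queue this revision for human review:", flagged)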

As for the study by Nature magazine suggesting Wikipedia is as reliable as Encyclopedia Britannica, [15] this essay rejects that study on the grounds that it rests on the same flawed methodological assumptions as the sister essay. Furthermore, it is possible that the more specific claims made by The Register regarding the study have merit:

"…Nature sent only misleading fragments of some Britannica articles to the reviewers, sent extracts of the children's version and Britannica's 'book of the year' to others, and in one case, simply stitched together bits from different articles and inserted its own material, passing it off as a single Britannica entry." [16]

Encyclopedia Britannica rejected the study, and while its analysis is unreliable for obvious reasons of bias, its claims do support a possible hypothesis, which itself bolsters the conclusion of this essay. Britannica denied the validity of the Nature study, claiming that it was "fatally flawed" on the grounds that the Britannica extracts were compilations that sometimes included articles written for the youth version. [17] Nature acknowledged the compiled nature of some of the Britannica extracts, but disputed the claim that this invalidated the conclusions of the study. [18] Encyclopedia Britannica also argued that although the Nature study found a similar error rate in the two encyclopedias, a breakdown of the errors indicated that the mistakes in Wikipedia were more often the inclusion of incorrect facts, while the mistakes in Britannica were "errors of omission".

There was also a study in the journal Reference Services Review which found that Wikipedia is less reliable than other reference sources. From its abstract: [19]

The study did reveal inaccuracies in eight of the nine entries and exposed major flaws in at least two of the nine Wikipedia articles. Overall, Wikipedia's accuracy rate was 80 percent compared with 95–96 percent accuracy within the other sources. This study does support the claim that Wikipedia is less reliable than other reference resources.

Supporters of the claim that WP:Wikipedia is succeeding may cite the study by Fernanda Viégas of the MIT Media Lab and Martin Wattenberg and Kushal Dave of IBM Research, which found that most vandal edits were reverted within around five minutes. [20] However, this isn't a particularly controversial or striking conclusion, nor is it particularly relevant; the researchers at the University of Minnesota reached the same conclusion. From a sociological perspective, Wikipedia's ability to prevent obvious vandalism is intriguing, but that alone is not how Wikipedia's success is defined, since the problems stem from systemic bias and the erosion of good content, which, unlike random vandalism, cannot simply be addressed by large networks of bots crawling Wikipedia and making automatic reverts according to a set algorithm. The development of such a network of bots is, according to the University of Minnesota, a major reason why blatant vandalism is difficult to sustain on Wikipedia.

A study by Dartmouth College found that there are perverse incentives in the way Wikipedia works: good editors receive essentially no social reward for good editing, while paradoxically many good contributors, who are neither vandals nor trolls, edit Wikipedia only a few times and then leave. [21] This offers an empirical basis for the brain-drain hypothesis and for the idea that many potentially good contributors are regularly turned away by Wikipedia's chaotic process, while the best veteran editors struggle to keep believing in the project as they continually face so many unnecessary obstacles to appropriate edits. This study has been criticized as having a flawed methodology, but it was received favorably by the Scientific American community despite its flaws. [22] [23]

An academic study of Wikipedia articles, circa September 2007, also found that the level of debate among Wikipedia editors on controversial topics often degenerated into counterproductive squabbling: "For uncontroversial, 'stable' topics self-selection also ensures that members of editorial groups are substantially well-aligned with each other in their interests, backgrounds, and overall understanding of the topics...For controversial topics, on the other hand, self-selection may produce a strongly misaligned editorial group. It can lead to conflicts among the editorial group members, continuous edit wars, and may require the use of formal work coordination and control mechanisms. These may include intervention by administrators who enact dispute review and mediation processes, [or] completely disallow or limit and coordinate the types and sources of edits." [24]

In conclusion, this essay finds support in studies conducted by the University of Minnesota, Dartmouth College, and Florida State University, as well as the study published in Reference Services Review.

An absolute definition of Wikipedia success

"Absolute" statistics in scientific analysis are generally to be regarded with immediate skepticism, because they do not measure continuous rates of change, only self-selected variables at certain fixed points in time ("snapshots"). The sister essay's unique definition of "success," appears to differ widely from the goal of the Wikipedia project explicitly stated in Wikimedia's Mission statement, early statements made about Wikipedia made by Jimbo Wales, and the definition of encyclopedia itself as a "compendium of human knowledge." Based on these definitions, one cannot argue that Wikipedia is succeeding because it has managed to develop an absolute amount of content that furthers that goal, while overall the majority of its resources go against the goal, through encouraging public ignorance and misinformation, and wasting resources on hosting unencyclopedic information on a website calling itself an "encyclopedia." The rebuttal also carelessly invokes the massive growth of Wikipedia without noting exactly where that growth has happened. This essay acknowledges the growth of Wikipedia but asserts that such growth is not happening in the right places and in the right ways.

According to Wikimedia's mission statement, the goal of Wikipedia is "to empower and engage people around the world to collect and develop educational content... ...and to disseminate it effectively and globally."

The definition of "success," appears to have been downgraded recently in response to the clear evidence that Wikipedia is failing. The argument, then, is not that "Wikipedia is succeeding," but that "Encyclopedias were never that useful to begin with." In response to criticism, Wikipedia should change, not lower the bar.

This is demonstrated by comparing earlier statements made by Jimbo Wales with the more recent cynicism and skepticism that grew as time went on.

Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing.

— Jimmy Wales, July 2004 [25]

I frequently counsel people who are getting frustrated about an edit war to think about someone who lives without clean drinking water, without any proper means of education, and how our work might someday help that person. It puts flamewars into some perspective, I think.

— Jimmy Wales, July 2004 [25]

We help the internet not suck.

— Jimmy Wales, September 2005 [26]

Our goal has always been Britannica or better quality. We don't always achieve that.

— Jimmy Wales, September 2005 [26]

No, I don't think people should cite [Wikipedia], and I don't think people should cite Britannica, either – the error rate there isn't very good. People shouldn't be citing encyclopedias in the first place.

— Jimmy Wales, September 2005 [27]

I can NOT emphasize this enough. There seems to be a terrible bias among some editors that some sort of random speculative 'I heard it somewhere' pseudo information is to be tagged with a 'needs a cite' tag. Wrong. It should be removed, aggressively, unless it can be sourced. This is true of all information, but it is particularly true of negative information about living persons.

— Jimmy Wales, May 2006 [28]

In a discussion on IRC on January 27, 2008, User:Bjweeks, who was defending Wikipedia's credibility, stated:

People want to know about Harry Potter's gay teacher, that is why people use Wikipedia.

—  User:Bjweeks Wikipedia IRC [29]

The sister essay relies on an "absolute measure" of Wikipedia's success, which doesn't seem to be quite what Jimbo originally had in mind when he founded Wikipedia, nor what the Wikimedia Foundation's original definition of "success" implies, nor even what the claim that Wikipedia should be "Britannica or better quality" implies. To have the same quality as Britannica would mean having the same marginal rate of accuracy, a relative measure of success, not some arbitrary absolute amount of encyclopedic content. The absurdity of an "absolute" measure of success or quality can be demonstrated through the following analogy: suppose you have a book, not owned by Britannica, that contains every piece of information in Encyclopedia Britannica. Suppose you then place this book on top of a landfill, which is continually growing and is even often defended by appeal to the policy WP:PAPER. What you end up with is this:

Non-featurable content isn't necessarily "garbage," but the analogy fits because it's not the kind of material that a traditional encyclopedia editor could rubber-stamp and say, "This is good encyclopedic material!"

To look at that and say, "Wikipedia is succeeding! I mean, look, all the information in Britannica is there and besides, in German Wikipedia, the landfill isn't growing!" is absurd. If Wikipedia were to delete everything other than the articles which are currently featured and then focus on creating articles worth featuring, then it would be sensible to call it an encyclopedia that is roughly comparable in accuracy to Britannica.

The essay regularly engages in such special pleading, including the following assertion:

"...many B- and Start-level articles are indeed superior to their counterparts in standard encyclopedias, such as the Encyclopædia Britannica. For example, the coverage of the B-level article, Secondary structure, a core topic in protein science, is far superior to its coverage in the Britannica..."

"Many humans have red hair, therefore most or all humans have red hair"? The assertion doesn't follow, not to mention that it invokes an ambiguous subjective opinion of the kind that shouldn't be found in value-free scientific analysis.

Growth, by itself, is not necessarily a good thing if the "growth" is in unencyclopedic material rather than encyclopedic material. If an article is "roughly comparable" in quality to its Britannica counterpart, then it is fair to assume that it should be featured.

Wikipedia success is qualitative, not quantitative

The sister essay seems to assume that Wikipedia's success is quantitative rather than qualitative. Wikipedia's success is defined not by the quantity of articles, but by the quantity of high-quality articles. As an example, it cites an "independent test".

To demonstrate the above point, we return to the "landfill" analogy. Two common objects found in landfills are banana peels and refrigerators, the latter being far larger than the former. The fact that the refrigerator is larger than the banana peel does not establish that arguing over larger "trash" rather than smaller "trash" is progress. Plenty of edit wars can certainly be fought over large articles and large amounts of content rather than over stubs; indeed, this is to be expected, since stubs are stubs precisely because they are viewed less often.

Progress is made only when there are more good-quality edits, not merely more large edits instead of small ones. Large numbers of large edits do not, by themselves, imply an increase in high-quality articles, which is something the cited "independent study" does not capture.

"Maintenance of high-quality articles"

For this argument, the sister essay offers no rebuttal, which is why it's not clear why its assertions are given the title "rebuttal." It acknowledges that high-quality articles are poorly maintained. It disputes that this reflects Wikipedia failure by means of a straw man: the argument here was not about random acts of obvious vandalism done in bad faith, but about the continual degradation of high-quality articles, which may even be done in good faith. If the sister essay acknowledges that high-quality articles are regularly poorly maintained, then it should address that fact rather than dispute it by bringing up the irrelevant point that random, blatantly obvious vandalism is rare. This essay agrees with that point and makes no assertion otherwise.

Lowering the bar

The idea that encyclopedias were never that reliable or useful to begin with is not an adequate defense of Wikipedia's success. If it were true, it's not clear why anyone, especially in Wikipedia's early history, should ever have had the enthusiasm that they did. It is often said that Wikipedia is not reliable as an academic source, but that it is a "good starting point" or a "good academic reference." These assertions seem self-contradictory. A good starting point for what? A good starting point for research would be a collection of good sources, something Wikipedia does not currently or necessarily provide. After all, if its sources were accurate, it ought to be far more accurate; the issue is the unreliability of the sources used. As such, it cannot be a starting point for anything other than what the average person generally thinks about a topic, based on what they can dig up on Google in a matter of seconds. In this regard, it is a step above Google, by saving people the time they would otherwise spend looking for sources themselves, but it remains a step below actual encyclopedias.

Furthermore, the idea that the appearance of Wikipedia failure could be a misleading artifact of editorial standards that are "too high" doesn't seem consistent with the wiki process, because the editorial standards are themselves generated by the wiki process, not arbitrarily determined by expert editors. If the editorial process were too strict, it would be particularly easy for a consensus to develop around lowering the standards. Ironically, the sister essay accuses the editorial standards of being too high, while at the same time refusing to come out and say directly, "And to address this, we think editorial standards should be lowered."

The assumption of limitless patience

Out of every assertion made in the sister essay, the "assumption of limitless patience" stands out like a sore thumb.

Critics argue: About one article a day on average becomes featured; at this rate, it will take 4,380 years for all the currently existing articles to meet FA criteria.
Wikipedians are very patient.

Wikipedia editors are not limitlessly patient. It is admittedly tough to measure a possible brain drain on Wikipedia without information on users' credentials or educational status. However, there are clearly a number of cases of expert editors, and editors who aren't experts but are just plain good, leaving Wikipedia because they get fed up with drama and bureaucracy. This is not special pleading, but the use of such cases as illustrations (see case study) of how existing Wikipedia policy works against good editing.

Speculations regarding German Wikipedia

It's tough to say why German Wikipedia has done so well, but one possible speculation is this: according to Kim Bruning, German Wikipedia is run as an "Adminocracy." In other words, it has a very low tolerance for trolling, and admins are given a great deal of deference in dealing with trolls and vandals. (However, see German restrictions in 2009, below.) Other wikis, such as Dutch Wikipedia, which are dominated more by inclusionist populism, are crumbling [citation needed] due to majoritarianism and bureaucracy, two things Wikipedia is not. Wikipedia administrator Leon Weber stated in May 2006:

If I see [a contributor] is publishing shit, maybe by swearing or not making sense, I warn him ...the second time he turns on, I block him. [30]

This may shock some people, but it's perfectly acceptable if, in fact, Weber is in the right when he does that. And the claim above is in direct contradiction to the flowchart which describes how to build consensus.

Why isn't Weber thinking of "a reasonable change that might integrate" with the troll's ideas? The answer: because there are none when you're dealing with trolls, in which case they should be blocked either in accordance with policy or in accordance with ignore all rules, which is itself policy. According to WP:IAR itself, Jimmy has said it was the first rule and has "always" guided the development of Wikipedia.

Then again, perhaps not? Jimmy has also argued that his authority rests on an appeal to tradition:

My authority and the authority of the ArbCom does not derive from the Foundation directly but from the longstanding historical traditions of our community

— Jimmy Wales, December 2007 [31]

That remark would seem to suggest that it isn't really important to ignore the rules at all, and that the policies derived from past consensus are all that matter. Ignore all rules could be regarded as a policy to be upheld on the claim that it is a "long-standing tradition," but that no longer seems to be the case; at least, it is no longer worded that way.

In any case, Ignore All Rules isn't present in that flowchart, and the result has been an unwritten policy that ignore all rules is itself to be ignored, which favors violations of WP:NPOV, WP:RS, and WP:FRINGE.

It could be speculated, then, that German Wikipedia has been more effective because the roles of administrators and of ignore all rules in the wiki process have been clearer there. The four main principles of German Wikipedia [32] are:

  1. Wikipedia is an encyclopedia
  2. A neutral point-of-view (where rationality and objectivity are both heavily emphasized)
  3. Free content
  4. No personal attacks

This is far simpler, clearer, and apparently more effective in practice, and this essay recommends that all wikis follow the example of German Wikipedia and reject the absurd proposals made in Wikipedia:Wikipedia is succeeding. A more thorough review of those proposals will be published in the future.

German restrictions in 2009

The judgment that the German Wikipedia has "done so well" is a matter of opinion. There are many German articles that contain accurate facts, as reviewed for article-verification status. However, the German Wikipedia is extremely hollow and limited in its content, as is easily seen from its rejection, during 2009, of many articles translated from English Wikipedia. Mainstream, simple articles, such as lists of American landmarks, were rejected when just a few words had been rendered into awkward German phrasing. What should have been a simple article transfer, needing just a few days of translation, was delayed by months, as people complained that the wording of a quickly translated article was "not perfect" and that "having no article would have been better" than to bother them with translations of major articles viewed hundreds of times per week. In that sense, the German Wikipedia is a failure in terms of scope: by being too restrictive and rejecting translated articles that are not grammatically perfect, it has turned away even many simple articles.

Written by laymen with unenforced policies

The basic problem of Wikipedia's hollow articles, or meager coverage of subjects, stems from a systemic failure of the concept: take a group of laymen, combine them with unenforced policies about articles ("that anyone can edit"), and the result should not be expected to be high-quality, expert articles on which everyone gleefully cooperates. Instead, the limited knowledge of the laymen is further hindered when people realize that policy violations occur without enforcement, and the system becomes demoralizing to many experts and laymen alike. A system that collects chaotic information, managed in a chaotic manner, cannot be expected to generate high-quality results and polite, cooperative writers. Instead, many marginal (and some good) articles are further hacked by angry people, annoyed by others behaving badly and tempted to hack whatever they can, inserting text in chaotic places before eventually quitting in disgust. A system "that anyone can edit" cannot, by definition, be policed tightly enough to limit such hacking. There are no qualification levels for so-called "trusted" users, who would lose edit access as enforcement for detrimental behavior; instead, many have learned to create sock puppets under multiple user names so they can continue to edit in other areas when warned to stop the hack-edits they have been caught making. Nor are there trust levels to empower users to edit more after they have shown a pattern of reasonable behavior. As a result, many articles written by experts (such as medical students, nurses, or computer scientists) have been hacked, by wikirot, hollowing out the professional or expert detail, slashed by angry or untrustworthy people pushing peculiar ideas. It is predictable: angry, unrestrained people will invariably generate many problems in many formerly expert articles, and so they have. Quality control requires control.
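
To make the "trust levels" idea concrete, here is a deliberately toy sketch. The EditorRecord structure, the thresholds, and the level names are invented for illustration; no such mechanism exists in MediaWiki as described here.

    from dataclasses import dataclass

    # Toy model of graded trust levels, as proposed above. Thresholds and level
    # names are invented for illustration only.
    @dataclass
    class EditorRecord:
        edits: int
        reverted: int  # edits later reverted as detrimental

    def trust_level(record):
        """Grant more latitude to editors with a track record of sound edits."""
        if record.edits == 0:
            return "new"
        revert_rate = record.reverted / record.edits
        if revert_rate > 0.25:
            return "restricted"   # loses some edit access as enforcement
        if record.edits >= 500 and revert_rate < 0.05:
            return "trusted"      # empowered to edit more sensitive pages
        return "standard"

    print(trust_level(EditorRecord(edits=800, reverted=10)))  # -> trusted
    print(trust_level(EditorRecord(edits=40, reverted=15)))   # -> restricted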

Wikipedia became a battleground and playpen

As a result of the tolerance for angry, unrestrained writers, the chaos of Wikipedia has driven off many serious people who prefer progress as a succession of solid, accomplished articles. Instead, Wikipedia became the battleground for angry people and the playpen of carefree writers. The people who remained to re-write Wikipedia fell into two groups:

  • intense people who treat Wikipedia as a battleground and spend time changing many articles to emphasize their opinions; or
  • carefree, fun-loving people who ignore controversies, while dabbling in entertaining subjects.

The results are quite predictable: the intense people will focus on serious subjects, with an intense mindset to push various fringe ideas. Do not expect serious articles to remain neutral; expect them, rather, to be rewritten with intensity. Meanwhile, the carefree people will focus on non-controversial pop-culture articles and other fun topics.

The intense people can be expected to edit many articles: the intensity of their ideas is the same intensity to re-write many articles from their viewpoints. They will make time to slant many articles and try to overwhelm any opposing ideas. They will, they really will. And don't try to stop them.

The carefree people will push for fun or frivolity in related articles. A likely result is that humor, puns, or comedic irony will appear in their writing, either as related quotes or slipped into the article text.

The authority is not always trustworthy

The viewpoint of a reliable source is relative to the observer, and the Wikipedia administrative and editorial hierarchy is no exception. To illustrate: Wikipedia is heavily influenced by what 'reliability' means for an encyclopedia held to, for example, a university's standards ('a reliable source must not be user-edited or original research'). A correlation of trustworthy information is not necessarily an indication of reliability. Philosophically, does reliability even exist? It is a pattern of recurrences and can also be a positive feedback loop. Sociologically, a positive feedback loop can suddenly cause a system to collapse ('the bridge seems trustworthy to walk on, so let's keep walking on it; suddenly the bridge falls'). The analogy is that a source with a reputation for being 'right', and which continues to be 'right', can suddenly be proven wrong, and in hindsight those who cited it realize they were misled. Recently, the Big Bang theory has been challenged, and further back in history people believed the Sun went around the Earth, and those who held the later view that the Earth goes around the Sun were persecuted for it. Philosophically, this expands further into the philosophy of knowledge.

References

  1. ^ "Wikipedia Celebrates 750 Years Of American Independence". 2010-06-26. Retrieved 2014-03-24.
  2. ^ See Wikipedia:List of hoaxes on Wikipedia for examples.
  3. ^ The Decline of Wikipedia: Even As More People Than Ever Rely on It, Fewer People Create It | MIT Technology Review
  4. ^ Halfaker, A.; Geiger, R. S.; Morgan, J. T.; Riedl, J. (2012). "The Rise and Decline of an Open Collaboration System: How Wikipedia's Reaction to Popularity Is Causing Its Decline". American Behavioral Scientist. 57 (5): 664. doi: 10.1177/0002764212469365. ISSN  0002-7642.
  5. ^ See WP:Anti-elitism
  6. ^ English Wikipedia statistics, English Wikipedia. Retrieved on 2008-01-24.
  7. ^ Wikipedia statistics – New Wikipedians, English Wikipedia. Retrieved on 2008-01-24.
  8. ^ "Wikipedia Statistics – Site Map". Retrieved 2008-01-25.
  9. ^ Gardner, Sue (19 February 2011). "Nine Reasons Why Women Don't Edit Wikipedia, In Their Own Words". suegardner.org (blog).
  10. ^ Andrew Lih (20 June 2015). "Can Wikipedia Survive?". www.nytimes.com. Washington. Retrieved 21 June 2015. ...the considerable and often-noted gender gap among Wikipedia editors; in 2011, less than 15 percent were women.
  11. ^ Statistics based on Wikimedia Foundation Wikipedia editor surveys 2011 (Nov. 2010-April 2011) and November 2011 (April - October 2011)
  12. ^ Wikipedia 'completely failed' to fix gender imbalance, BBC interview with Jimmy Wales, August 8, 2014; starting at 45 seconds.
  13. ^ Emigh & Herring (2005) "Collaborative Authoring on the Web: A Genre Analysis of Online Encyclopedias", Proceedings of the Thirty-Eighth Hawai'i International Conference on System Sciences. ( PDF)
  14. ^ Priedhorsky, Chen, Lam, Panciera, Terveen, Riedl. "Creating, Destroying, and Restoring Value in Wikipedia" (PDF). Retrieved 2008-01-25.{{ cite web}}: CS1 maint: multiple names: authors list ( link)
  15. ^ Stephen Cauchil (2005-12-15). "Online encyclopedias put to the test". The Age. Melbourne. Retrieved 2008-01-25.
  16. ^ Orlowski, Andrew (2006-03-26). "Nature mag cooked Wikipedia study". The Register. Retrieved 2008-01-25.
  17. ^ "Fatally Flawed" (PDF). Encyclopædia Britannica. March 2006. Retrieved 14 July 2007.
  18. ^ "Britannica attacks". Nature. 440 (7084): 582. 2006-03-30. doi: 10.1038/440582b. PMID  16572128. Retrieved 2006-07-14.
  19. ^ "Emerald: Article Request". Retrieved 2008-02-19.
  20. ^ Fernanda Viégas; Martin Wattenberg; Kushal Dave. "Studying Cooperation and Conflict between Authors with history flow Visualizations" (PDF). MIT. {{ cite journal}}: Cite journal requires |journal= ( help)
  21. ^ Anthony, Denise; Smith, Sean W.; Williamson, Tim. "The Quality of Open Source Production: Zealots and Good Samaritans in the Case of Wikipedia". Dartmouth University. Retrieved 2008-01-24.
  22. ^ Larry Greenemeier. "Wikipedia "Good Samaritans Are on the Money". Scientific American. Retrieved 2008-01-24.
  23. ^ David Drake. "Dartmouth Wikipedia Study Flawed But Still Valuable". Scientific American. Retrieved 2008-01-24.
  24. ^ Besiki Stvilla; Michael Twidale; Linda Smith; Les Gasser (Oct 2007). "Information Quality Work Organization in Wikipedia" (PDF). Florida State University. pp. (on collaborative quality control), 38 pages, 650kb PDF. Retrieved 2009-01-22.
  25. ^ a b Jimmy Wales, July 2004 (2004-07-24). "" Wikipedia Founder Jimmy Wales Responds"". Slashdot. Retrieved 2008-01-24.{{ cite web}}: CS1 maint: numeric names: authors list ( link)
  26. ^ a b Jimmy Wales 2004 (2004-07-24). ""C-SPAN Interview"". C-SPAN. Retrieved 2008-01-24.{{ cite web}}: CS1 maint: numeric names: authors list ( link)
  27. ^ Jimmy Wales, December 2005 (2005-12-14). "Wikipedia: "A Work in Progress"". BusinessWeek. Retrieved 2008-01-24.{{ cite web}}: CS1 maint: numeric names: authors list ( link)
  28. ^ Jimmy Wales, 2006 (2006-05-16). ""Zero information is preferred to misleading or false information"". WikiEN-l electronic mailing list archive. Retrieved 2008-01-24.{{ cite web}}: CS1 maint: numeric names: authors list ( link)
  29. ^ This quote was used with User:Bjweeks' permission.
  30. ^ Peter Munro, 2005 (2005-09-20). "Life, the universe and Wiki". Sydney Morning Herald. Retrieved 2008-01-24.{{ cite news}}: CS1 maint: numeric names: authors list ( link)
  31. ^ Jimmy Wales, 2007 (2007-12-26). "Life, the universe and Wiki". Sydney Morning Herald. Retrieved 2008-02-04.{{ cite web}}: CS1 maint: numeric names: authors list ( link)
  32. ^ "Wikipedia: Basic Principles, German Wikipedia". Retrieved 2008-01-25.

See also

Article space

Project space

User space
