This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
I'm curious as to the intent of this page; do you seek to create an exhaustive list of magazines and books to use as resources? Or is this page a group of examples, in which case I wonder why it is needed in addition to the normal pages like WP:RS? ( Radiant) 13:39, 14 November 2006 (UTC)
WP:RS was substantially rewritten on 1st December 2006 ( diff). The following text, which had existed for a while and which we had borrowed for these guidelines, was lost:
Scientific journals are the best place to find primary source articles about experiments, including medical studies. Any serious scientific journal is peer-reviewed. Many articles are excluded from peer-reviewed journals because they report what is, in the opinion of the editors, unimportant or questionable research. In particular, be careful of material in a journal that is not peer-reviewed but reports material from a different field. (See the Marty Rimm and Sokal affairs.)
The fact that a statement is published in a refereed journal does not make it true. Even a well-designed experiment or study can produce flawed results or fall victim to deliberate fraud. (See the Retracted article on neurotoxicity of ecstasy and the Schön affair.)
Honesty and the policies of neutrality and No original research demand that we present the prevailing " scientific consensus". Polling a group of experts in the field wouldn't be practical for many editors but fortunately there is an easier way. The scientific consensus can be found in recent, authoritative review articles or textbooks and some forms of monographs.
There is sometimes no single prevailing view because the available evidence does not yet point to a single answer. Because Wikipedia not only aims to be accurate, but also useful, it tries to explain the theories and empirical justification for each school of thought, with reference to published sources. Editors must not, however, create arguments themselves in favor of, or against, any particular theory or position. See Wikipedia:No original research, which is policy. Although significant-minority views are welcome in Wikipedia, the views of tiny minorities need not be reported. (See Wikipedia:Neutral Point of View.)
Make readers aware of any uncertainty or controversy. A well-referenced article will point to specific journal articles or specific theories proposed by specific researchers.
The popular press generally does not cover science well. Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance presenting a new experimental medicine as the "discovery of the cure" of a disease. Also, newspapers and magazines frequently publish articles about scientific results before those results have been peer-reviewed or reproduced by other experimenters. They also tend not to report adequately on the methodology of scientific work, or the degree of experimental error. Thus, popular newspaper and magazine sources are generally not reliable sources for science and medicine articles.
What can a popular-press article on scientific research provide? Often, the most useful thing is the name of the head researcher involved in a project, and the name of his or her institution. For instance, a newspaper article quoting Joe Smith of the Woods Hole Oceanographic Institution regarding whales' response to sonar gives you a strong suggestion of where to go to find more: look up his work on the subject. Rather than citing the newspaper article, cite his published papers.
One method to determine which journals are held in high esteem by scientists is to look at impact factor ratings, which track how many times a given journal is cited by articles in other publications. Be aware, however, that these impact factors are not necessarily valid for all academic fields and specialties.
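For readers unfamiliar with the metric: the standard two-year impact factor is the number of citations received in a given year to items a journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch of that arithmetic, with invented counts for a hypothetical journal (all numbers below are for illustration only):

```python
# cites[y]: citations received in 2006 to articles the journal published in year y
# items[y]: citable articles the journal published in year y
# (all counts are made up for illustration)
cites = {2005: 120, 2004: 80}
items = {2005: 150, 2004: 100}

def impact_factor(cites, items, year):
    """Two-year impact factor for `year`: citations in `year` to the two
    preceding years' articles, divided by the articles published then."""
    c = cites.get(year - 1, 0) + cites.get(year - 2, 0)
    n = items.get(year - 1, 0) + items.get(year - 2, 0)
    return c / n if n else 0.0

print(impact_factor(cites, items, 2006))  # 200 citations / 250 articles = 0.8
```

The denominator is why the caveat in the text matters: a journal serving a small specialty has few citing papers available, so its numerator is structurally small regardless of article quality.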
In general, journals published by prominent scientific societies are of better quality than those produced by commercial publishers. The American Association for the Advancement of Science's journal Science is among the most highly regarded; the journals Nature and Cell are notable non-society publications.
Keep in mind that even a reputable journal may occasionally post a retraction of an experimental result. Articles may be selected on the grounds that they are interesting or highly promising, not merely because they seem reliable.
There are a growing number of sources on the web that publish preprints of articles and conference abstracts, the most popular of these being arXiv. Such websites exercise no editorial control over papers published there. For this reason, arXiv (or similar) preprints and conference abstracts should be considered to be self-published, as they have not been published by a third-party source, and should be treated in the same way as other self-published material. See the section above on self-published sources. Most of them are also primary sources, to be treated with the caution described in various sections of this guideline.
Researchers may publish on arXiv for different reasons: to establish priority in a competitive field, to make available newly developed methods to the scientific community while the publication is undergoing peer-review (an especially lengthy process in mathematics), and sometimes to publish a paper that has been rejected from several journals or to bypass peer-review for publications of dubious quality. Editors should be aware that preprints in such collections, like those in the arXiv collection, may or may not be accepted by the journal for which they were written — in some cases they are written solely for the arXiv and are never submitted for publication. Similarly, material presented at a conference may not merit publication in a scientific journal.
There are techniques that scientists use to prevent common errors, and to help others replicate results. Some characteristics to look for are experimental control (such as placebo controls), and double-blind methods for medical studies. Detail about the design and implementation of the experiment should be available, as well as raw data. Reliable studies don't just present conclusions.
Responding to "How do you know what the point is? Discuss this in Talk before deleting."
I wish to remove the line that says "Some well known and respected popular science authors include Richard Dawkins and Stephen Jay Gould." Can the author of this sentence please explain the point of including just a couple of names out of potentially hundreds of worthy authors? Colin° Talk 14:27, 13 April 2007 (UTC)
(Apologies for the sarcasm – just a bit of fun).
Nice debate, sound conclusion :-) As an example, Sacks is infamous for his non-standard and sensationalist views on TS (I've heard other physicians opining on his writing). While we're on the topic: I don't understand either of these edits. [1] We need to get back to, Avoid citing the popular press, as they usually get it wrong. [2] The Merck Manual is an utter and total inaccurate wreck when it comes to TS; I hope it's better in other areas. SandyGeorgia ( Talk) 14:33, 13 April 2007 (UTC)
This page currently reads more like an essay than a guideline. That might be fine if we want to keep it an essay, and we may. If not, then the text needs to be condensed and kept to the point. There's a lot more that can be said on this topic and I really welcome other contributors. It might be best to let the text expand a bit before we start refining it back down. That said, if something important got deleted or watered down, then we should bring it back. Colin° Talk 14:44, 13 April 2007 (UTC)
I deleted the claim that broadsheets can be reliable sources of medical information, while tabloids are not. Some tabloids are excellent sources of medical information, and some broadsheets are not.
Here's a good example from today's New York Daily News, about the automobile accident of New Jersey Governor Jon Corzine, who was not wearing a seat belt.
[3] A difficult recovery that could take months, by Christina Boyle, New York Daily News, April 14th 2007.
This is what teachers call a "teachable moment," an opportunity to teach an important message because the subject has everyone's attention. This story explains exactly how someone is hurt in an auto accident if they're not wearing a seat belt, and it explains Corzine's injuries in meaningful detail. I've read hundreds of accident reports in the engineering literature, and this tabloid news story covers all the essential points. Nbauman 14:12, 14 April 2007 (UTC)
I've removed the line:
This may be true but I've yet to see an example. If they do, surely it is more for certain social, historical or biographical information rather than for medical facts? Some examples would help. I've also removed the line:
Which IMO is not written in a suitable tone.
Finally, I've added a line to clarify where I think there is consensus for using and not using newspapers. Colin° Talk 11:37, 16 April 2007 (UTC)
Nbauman, earlier you said "who are we to tell people not to read them". I think this may be one source of our disagreement. This is not an article to advise people what to read (whether for pleasure or for research for a WP article). It is solely concerned with what we should cite. Certain books, newspapers, magazines and blogs may be a reliable source of medical information. They are not suitable for citation in an encyclopaedia (with regard to medical facts). This page is not read by Wikipedia's readers - it isn't for them. It is for editors. Are you trying to help improve the quality of our sources, or defending popular journalism? Colin° Talk 17:11, 16 April 2007 (UTC)
The section, In science, avoid citing the popular press is complete personal opinion, completely unsourced, completely overgeneralized, and completely wrong. For example:
[5]Annals of Medicine: The Bell Curve; What happens when patients find out how good their doctors really are? by Atul Gawande, The New Yorker, Dec. 6, 2004
Gawande is an MD, and in addition to the New Yorker he writes for the New England Journal of Medicine. Does the author of this section believe that Gawande does not cover science well when he writes for the New Yorker, but does cover science well when he writes for the NEJM? Similarly, Gina Kolata, a PhD, used to write for Science magazine before she moved to the New York Times. I could give many similar examples.
Many articles in the popular press have identified problems in medicine that have been ignored by the peer-reviewed literature, for example defective heart defibrillators, or financial conflicts of interest in the committees that set guidelines and recommend drugs. Peer-reviewed journals often cite newspapers in the footnotes.
The writer of this section does not seem to have considered that scientific results are normally released first as presentations at scientific meetings before they are peer-reviewed, and that is where the newspapers and magazines find out about them.
I believe that every article in the popular press should be evaluated on its own merits. You can't replace critical evaluation with a rule of thumb like "The popular press generally does not cover science well."
This section should be completely rewritten. Nbauman 17:24, 14 April 2007 (UTC)
I've not finished reading this but already spotted something that highlights why newspapers make poor secondary sources. Page two discusses various operations and the variable success rate amongst surgeons. One example given:
The article doesn't fully cite the study, which is typical and understandable (though perhaps not for the online edition, which has no space concerns). So we don't know:
And so on. Most "for-professionals" articles would give more information than this, plus a citation. Without this traceability from Wikipedia to secondary source to primary source, our readers are limited in how much they can learn should they ask questions concerning the reliability of the data and how the author chooses to use it to make their case. The best popular science/medical books provide citations. So should online newspapers. Colin° Talk 23:09, 14 April 2007 (UTC)
Every oncologist might think "Ah, she's citing McArdle and Hole 1991" but almost none of our readers would. I was going to try to look up the paper myself but you've saved me the trouble. Unfortunately, it only goes to strengthen the argument that the primary source should be cited by WP rather than the newspaper (which may be less than helpful in finding it). For example, I can now cite:
which fortunately has the full text available free online. Even from the abstract, our reader can tell more than the newspaper provided. We can also see there were two follow-up letters, one critical and one supportive. PubMed also tells me that McArdle and Hole continued their research. Which is just as well: these were "patients with colorectal cancer presenting over the six years from 1974 to 1979". The quality of an operation performed 30 years ago is of diminishing interest to those going under (or holding) the knife today. They published a follow-up paper in 2002 (with a much larger number of patients, operated on between 1991 and 1994) that confirmed that variability amongst surgeons was still a problem:
So it could be argued that the first might be a historical classic, but the second is of more relevance today. They have also answered some of my questions about poor patients doing less well:
I see they have refined their conclusions about surgeon variability, by showing that surgeon speciality is a better guide than just volume of work:
I could go on (for example, to check that those who cite this paper do so favourably) but in just a couple of minutes, I've found so much more high quality source material to help improve a WP article on either colorectal cancer or issues of surgeon competence. Colin° Talk 07:00, 15 April 2007 (UTC)
Gawande's work for NEJM is peer-reviewed. His journalism is not. In general, peer-reviewed science is the preferred source for any medical content, with all other sources inferior to it. Within peer-reviewed science, I think we should adopt EBM grading. A meta-analysis or systematic review is much more powerful as a source than individual trials, case-control studies, case series or case reports. JFW | T@lk 20:14, 15 April 2007 (UTC)
Re popular press, now that TV and radio news programs have websites, I see a need for some guidance re citing them. See for example fetus in fetu, where one editor cites ABC and MSNBC news as sources of medical data (incidence, treatment). -- Una Smith 15:20, 9 July 2007 (UTC)
This section needs a paragraph explaining that niche journals have low impact factor regardless of quality, because their readership is small; niche journals can be evaluated (and to some extent compared to "core" journals) based on their average article halflife, meaning the number of years over which an article is cited. -- Una Smith 15:28, 9 July 2007 (UTC)
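The cited half-life Una mentions can be sketched as follows: count back from the newest articles until the accumulated citations reach half of all citations the journal received. This simplified version returns whole years (the published metric interpolates to a fraction of a year), and the citation counts are invented for illustration:

```python
def cited_half_life(citations_by_age):
    """Smallest article age (in years, newest first) whose cumulative
    citations reach at least half of the journal's total citations.
    Simplified to whole years; the published metric interpolates."""
    total = sum(citations_by_age)
    running = 0
    for age, n in enumerate(citations_by_age, start=1):
        running += n
        if 2 * running >= total:
            return age
    return len(citations_by_age)

# Invented counts: citations to 1-year-old articles, then 2-year-old, etc.
print(cited_half_life([40, 30, 20, 10, 10, 5, 5]))  # half of 120 reached at age 2
```

A long half-life suggests articles that stay useful for years, which is one way a small niche journal can look strong even though its impact factor is low.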
If at all possible, avoid promoting the idea of rating an article by the author's authority: where they work, their title, their rank. Judge the person by the quality of the work, not vice versa. -- Una Smith 15:28, 9 July 2007 (UTC)
This page was set up as a Proposed Guideline. It would appear that WP only accepts such a proposal for a finite time before marking it Historical or Rejected. Can we decide what we are going to do with it? My gut feeling is that there currently isn't enough traffic or discussion on this for it to move forward quickly enough to become a formal guideline before someone retires it again. If someone else wants to beat a drum to round up some contributors, then great. I was hoping that we might get contributions from editors with experience in writing or in critically reading medical articles. I do believe the project needs these guidelines.
But it may be that those needs can be met by taking the banner off and leaving it as an informal guideline in Project space. Thoughts? Colin° Talk 11:35, 15 May 2007 (UTC)
I don't think this guideline is ready to be adopted. Much of it is unsourced, unsupported personal opinion. For example, "The popular press generally does not cover science well." Who says so? What is their evidence? Isn't this an overgeneralization? In some cases, the Wall Street Journal has turned out to be more reliable than the New England Journal of Medicine.
Why doesn't the entry cite the extensive literature on how the popular press covers science (which finds that the quality of coverage varies from very good to very bad)? What about the library literature, such as Magazines for Libraries?
A fundamental problem is that this is just a list of sources and judgments about them. It wouldn't be much help in resolving real disputes that go on in real articles. Define the problem. I would suggest that you look at some actual controversial medical entries, and examine the disputes that come up over reliable sources, etc. Look at Dichloroacetic acid, or Diabetes.
People do cite the popular press all the time. What should we do about it? Most of the peer-reviewed literature isn't available free on the Internet, so it isn't verifiable to someone who doesn't have a subscription. What do we do about that?
What are the disputes and how should they be resolved, in terms of reliable sources? Nbauman 14:26, 15 May 2007 (UTC)
Maybe the proponents could provide some examples of articles which would be removed if this guideline was adopted and why they are inappropriate for inclusion at WP. -- Kevin Murray 14:32, 15 May 2007 (UTC)
I agree 100% that this is not ready to become a guideline right now. That's why I now think our use of "Proposed" may be a little premature. I had thought the label was OK for saying "I propose we have some guidelines on this. Here's a start I've made...". But it is being interpreted as "Here's a set of guidelines I propose. Discuss...". I don't really mind what the banner says (but not "rejected", please) or if we don't have one. It would be nice to have some kind of header/intro that said:
I think Nbauman makes some good points, but WP guidelines do not need to be sourced. Opinion is fine if there is consensus.
I'm finding some of the newspaper arguments are starting to repeat so would welcome input from others. Can we please stay focused that these are guidelines for medical facts. When has The Wall Street Journal ever been "more reliable" for medical facts than the NEJM? I'm not talking about breaking some medical scandal a few weeks early. I've just tried searching their online site for medical info and have been unable to find anything other than articles about how some drug approval or loss of patent is affecting some company's share price, or a paragraph on new research opportunities (that affect a company's share price). Colin° Talk 15:21, 15 May 2007 (UTC)
The statement "In some cases, ... has turned out to be more reliable than" just doesn't work. I'm sure you can find cases when it was "more accurate than" for a given topic and moment in time. But "reliable" implies one can regularly, not just occasionally, depend on it. If this was an essay on "Accurate sources" then a survey of the "extensive literature" on different sources and their quality might make an interesting read.
The other aspect we need to consider is "useful". I think the example on the Bell Curve above showed that newspapers aren't as useful a source as a journal, even if the information is technically accurate.
I'll have a look at your other points later to see what I can find. Colin° Talk 15:58, 15 May 2007 (UTC)
Here's an example of using the popular press, which I cleaned up using peer-reviewed or medical sources just last week — from an area I'm familiar with.
Note the headline — gene found !! This is what the popular press does.
When in fact ... The peer reviewed sources are available, and the finding is reported in a way that is more "scientifically" and "medically" correct. The BBC merely parroted some portions of the Duke Medical News, while adding nothing clarifying or illuminating.
Note the more cautious and accurate headline in the Duke Medical press release, and The researchers estimate that the SLITRK1 mutations account for 5 percent of trichotillomania cases. This gene is not significant in and of itself (it is not *THE* gene that has been discovered as some earth-shattering event), as much as it provides a vehicle for future research directions. There is no need to cite the "hyped" BBC version, when the peer-reviewed article can be found in a medical library (and we don't choose our sources based on whether they are easily available online, free or not - we choose the best and highest-quality sources period, even if they're not available online, which should be the peer-reviewed medical literature).
Further, if the issue of citing the popular press is the only problem with these guidelines, we can work on the wording. But, whenever peer-reviewed sources are available, they could at least be preferred over the popular press, which tends to hype results and take them out of context or proportion. SandyGeorgia ( Talk) 16:12, 15 May 2007 (UTC)
Yes, that's what I'm trying to do. I only object to the oversimplified, unsupported statement that (1) If it appears in the popular press it's not reliable and (2) if it appears in the peer-reviewed literature it is reliable.
My position is that the popular press varies in reliability. I gave you links to a web site run by doctors and journalists, in which doctors review and evaluate the reliability of articles in the popular press. I also cited reference books that librarians use, such as Magazines for Libraries, that evaluate the reliability of popular magazines. Some popular publications and newspapers are more reliable than others.
Peer-reviewed journals are usually more reliable, but not all of them. I linked to a publication, the Brandon-Hill list, that lists the most reliable peer-reviewed medical publications. Some peer-reviewed publications are financed by drug companies, or medical device companies, and publish "peer-reviewed" articles that support the use of their products. So some peer-reviewed journals are more reliable than others.
I would suggest that the guidelines include language like that above.
In general, good peer-reviewed publications are more reliable than the newspapers or popular press. But there are lots of exceptions. Even the peer-reviewed journals, like The Lancet, will publish articles that they know are wrong, because they want to get the argument out for debate, as they did with that article on rats who ate genetically modified potatoes. The BMJ (I think) published an article on mercury preservatives in vaccines which they and every legitimate doctor have repudiated, but people keep quoting it. Nbauman 21:26, 16 May 2007 (UTC)
Hello. Are the AHRQ guidelines on grading evidence too technical for inclusion here? I'd really like to see a summary of the grading as outlined in evidence-based medicine included here - I think it's important, no? Just a table like this:
| Grade | Evidence | Description |
|---|---|---|
| A | Ia, Ib | Requires at least one RCT as part of the body of literature of overall good quality and consistency addressing the specific recommendation |
| B | IIa, IIb, III | Requires availability of well-conducted clinical studies but no RCTs on the topic of recommendation |
| C | IV | Requires evidence from expert committee reports or opinions and/or clinical experience of respected authorities. Indicates absence of directly applicable studies of good quality. |
Would do the trick. Thoughts? Nmg20 17:37, 15 May 2007 (UTC)
I think that would be a good and even necessary addition. You can't write about medicine if you don't understand this. Nbauman 18:17, 15 May 2007 (UTC)
Please add to the page, we can tidy/refine later. There is more than one way of grading sources. The current headings (Periodicals, Books, Online) could perhaps be demoted to 2nd level under a new "Media types" heading (or similar wording). Then we could have another top level heading for "Research types", for example.
We should link to Trish Greenhalgh's "How to Read a Paper" Series. If people read that, they'd have a good idea of different study types. Colin° Talk 22:20, 15 May 2007 (UTC)
I find that it is often easier to access reprints of conference proceedings online than complete journal articles. I'm not sure if that is only true for myself as a veterinarian or if it also applies to M.D.s. Where do these proceedings fall in the matter of reliability, in this project's opinion? Note that these are major conferences, with well-respected lecturers, and I'm only referring to reviews of topics, not new research. Thanks. -- Joelmills 03:12, 25 June 2007 (UTC)
Still easier: don't bother citing anything. (That was a joke.) I cite such sources only as a last recourse, and then only if I know the article actually was presented at the conference. Sometimes the proceedings are published before the conference, and then at the conference the article is retracted! This is more common when the proceeding volume consists of abstracts or short articles that are little more than abstracts. Some proceedings volumes are peer reviewed and/or of the highest quality, but they are in the minority, and you really have to know the specific research community to know which proceedings volumes are top-notch and which ones are not. It isn't enough to go by series title, because this can change from year to year, depending on who is the editor. -- Una Smith 15:40, 9 July 2007 (UTC)
In the spirit of being bold, I've made a few changes to the proposal. Here is a summary diff. Feel free to revert them if they seem redundant or inappropriate. In general, I'd like to be a little more explicit on the fact that primary sources (journal articles reporting original findings) are a welcome and even necessary part of medical articles, but that the interpretation of such research must hew carefully to that provided in reliable secondary sources (reviews/textbooks). I've seen significant issues with editors citing a number of basic-science journal articles and then leaping to a totally off-the-wall conclusion, which is then defended as "cited content".
Another issue is articles on supposed medical conditions which have never been reported or recognized by any medical authority (see mucoid plaque). I would favor including something in the guideline along the lines of, "If a purported medical condition, test, or treatment has been described and evaluated by the medical community, then it should be easy to cite reliable sources on the subject. In the absence of such sources, topics should not be presented as if they are accepted by the medical community." But perhaps this is overstepping the bounds of this proposed guideline. MastCell Talk 17:04, 25 June 2007 (UTC)
Discussion moved from Wikipedia talk:WikiProject Clinical medicine:
WP:MEDRS seems to be at odds with WP:MEDMOS; MEDMOS encourages the use of PubMed references, MEDRS implicitly discourages them.
WP:MEDRS states:
In my opinion:
I look forward to the discussion. My thoughts on this arose from this discussion-- and are related to changes to the McClintock effect article. Nephron T| C 06:12, 24 June 2007 (UTC)
I have been looking at WP:MEDRS, and I would be relieved to have some guidelines such as those that are listed. This would become very relevant to topics in dentistry, such as "new-and-improved" products but especially on fluoride and amalgam. It seems to me that, by far, the most important item in MEDRS is that an article must "present the prevailing medical or scientific consensus." Anything else placed in an article should be labeled as a minority view or one that is not accepted by the established consensus. As long as this principle is followed, then I do not foresee major reliability or original research problems arising. Secondary sources can be encouraged in the guideline to make certain that medical/scientific consensus is presented, but I think the most important point to emphasize is that (regardless of the source) the content presented in the article, whether held by consensus or a minority viewpoint, must be presented as such. Saying all this, I hope this proposal can eventually be elevated to a guideline with a little work. - Dozenist talk 14:26, 25 June 2007 (UTC)
Have a look at this old version of Green tea. The FDA rejection of health benefits can be found here, which, although in the form of a letter, is the result of a serious review of the available evidence. Look how it is dismissed:
I don't know much about green tea or those studies, but heavyweight studies such as the FDA one should not generally be placed lower in the importance-hierarchy than individual research papers. It is a common misconception on WP that primary is better, probably due to the word's other uses in the English language. MEDRS must not give this impression. Colin° Talk 18:17, 25 June 2007 (UTC)
To my mind, "undergraduate medical textbooks" are absolutely appropriate as sources for medicine-related articles, and are likely to be more appropriate for an online encyclopaedia than postgraduate ones. I've reverted this change in the main article for now, but reckon it merits a discussion on here... Nmg20 ( talk) 16:01, 17 November 2007 (UTC)
I agree with almost everything you've written, and no, undergrad textbooks wouldn't normally be referred to in medical papers. However - at the risk of stating the obvious - we're not trying to produce medical review papers here, we're trying to provide a detailed-but-accessible summary of available medical information.
Undergraduate textbooks have several advantages from this point of view. They're written relatively simply, they rarely include controversial information (which is not to say that we shouldn't include such info, merely that it can be sourced elsewhere), and they are generally excellent summaries of the currently accepted medical / scientific understanding of whatever subject they deal with.
I'd also take issue with the idea that textbooks are rarely written by a specialist in that field. To take three textbooks I myself have used and know are commonly recommended: Obstetrics and Gynaecology is by Lawrence Impey, MRCOG and consultant obstetrician at the John Radcliffe. Neuroanatomy is by Crossman (prof in Anatomy at Manchester) and Neary (professor of neurology). Even the crap-sounding Cardiovascular system at a glance is by Aaronson (reader in pharmacology at GKT/KCL), Ward (professor of respiratory cell physiology at GKT/KCL) and Wiener (professor of medicine and physiology at Johns Hopkins). I'm not sure these are exceptions - do other contributors have views here?
I agree, however, that exam-question books are not suitable sources, and nor are cramming books - but I think we would lose out by excluding textbooks. Put it this way - people will still be using newspaper articles in medical articles here, and those are far worse secondary sources than medical textbooks... Nmg20 ( talk) 23:36, 17 November 2007 (UTC)
I seriously doubt that *any* undergraduate textbook covers Tourette syndrome accurately; I would not want to weaken this guideline to allow their inclusion. SandyGeorgia ( Talk) 16:29, 18 November 2007 (UTC)
In some of my recent article work (e.g. subarachnoid hemorrhage, Wilson's disease, ascending cholangitis) I have found that even the best recent clinical reviews are still short of information, especially on the softer areas like quality of life and prognosis. I find myself reaching for primary sources to complement the main reviews, but I remain concerned that we are opening ourselves up to WP:SYNTH. Are there any views on this? JFW | T@lk 15:10, 18 June 2008 (UTC)
My point is that reviews sometimes don't cover the points that you'd really want to address. For instance, on SAH I wanted to mention the fact that many people with previous SAH have persistent headaches. The only evidence for this could be found in a primary research study that definitely addressed the question, but is of inferior strength on our "hierarchy" of sources. JFW | T@lk 10:13, 19 June 2008 (UTC)
(Please skip down to the next sub-section for another succinct introduction.)
I think there's a ubiquitous misunderstanding of the word secondary reflected on this page. A primary study generally reviews and discusses its findings in light of prior evidence. This makes it a secondary source for information on those earlier articles. I've noted this with an RfC at WT:NOR here, and also over at Talk:Coeliac disease#Misunderstanding of secondary in the context of primary studies and reviews. The question is: is a reviewer necessarily more credible to comment on the prior science than a researcher discussing a primary study, all else equal? I don't think so -- although there may be a small bias, I don't think the reviewer should be considered immune to these biases. Now, systematic reviews help to eliminate bias by forcing the reviewer to be precise and evaluate all studies -- but these are uncommon, and still susceptible to bias. In summary, more importance should probably be placed on the date of the publication and the comprehensiveness with which it approaches a topic. Very broad reviews are likely to miss important details which specialized papers will discuss. ImpIn | ( t - c) 06:44, 28 June 2008 (UTC)
Secondary sources are preferable to primary ones, even if we're talking about the "previous work" area of primary sources. As a practical matter, when a secondary source is reviewed, its reviewers check more carefully that it's comprehensive, neutral, etc. For a primary source, reviewers concentrate on the new results being reported, and tend to treat the previous-work section less carefully. Generally speaking, primary sources try to make a point and to advance research in a particular area, and are more prone to list previous work that agrees with them, and are less prone to list other sources that disagree; whereas secondary sources are trying to cover a topic more generally and fairly, and are a much better way to achieve NPOV. Of course, this is just a tendency, and one can find bad secondary sources and good primary ones; but it is a strong tendency and should not be ignored. Eubulides ( talk) 10:36, 30 June 2008 (UTC)
Generally speaking, primary sources try to make a point and to advance research in a particular area, and are more prone to list previous work that agrees with them, and are less prone to list other sources that disagree; whereas secondary sources are trying to cover a topic more generally and fairly, and are a much better way to achieve NPOV.
I think you're assuming bad faith. There are scientific facts which are being suppressed for a pedantic reason, i.e. that they are not cited in a review. The fact is that a review focused upon casein in wheat will probably never happen -- thus the likely sensitivity of coeliac patients to casein will never be mentioned. The strong finding of budesonide is similar; that study may not be replicated for another few years. The longstanding (7 years?) misunderstanding of the word secondary is not a minor issue. Nor is it minor that you seem to place a greater emphasis on a study's mention in "a high-quality review" (based on a wiki editor's opinion) than on replications. You're fine with a lot of behind-the-scenes editorial work in interpreting reviews which are high-quality, but when citing scientific facts stated in plain language, you seem to think it is verboten. I don't think that makes sense. The former seems more questionable to me than the latter. Interesting findings should be reported, and similar studies can be reported alongside. That's not SYNTH, that's just pointing to studies. For example, at least 4 studies have been done showing Se reducing the toxicity of MeHg (methylmercury), with 1 exception. Not all of these are cited in one review; they are cited in different reviews, and the most recent (2007) in perhaps none. I think stating that "Several studies have found that Se reduces the toxicity of MeHg in rats, with an exception"[footnotes] makes sense. You apparently do not.
Also, your wording makes it unclear: when exactly could I cite a study such as the casein one? Should it be replicated once? After there's a systematic review on casein and coeliac patients? After that one study is mentioned in a review? How about the budesonide study? There is nothing in MEDRS which says you cannot cite primary studies, especially remarkable ones like these. You're actually pushing for a policy which does not exist. The current policy even says that popular press articles can sometimes be cited, and here you're fighting tooth and nail against the addition of remarkable primary studies.
Note: I stand by my censorship comment; whether the censorship is intentional or not, it amounts to censorship. In case you haven't noticed, I'm not at Wikipedia to win popularity contests. You're appealing to your own fictitious policy to keep interesting, encyclopedia-worthy content out of the encyclopedia. Further, this impels other people to do the same, and allows people to justify censorship ( recent example). Wikipedia is not conservativopedia; if Einstein had published his paper on Relativity today, we would not want to "wait until it is verified" to note it. There's no reason for that position. There's no rule that a study has to be replicated "or noted in a high-quality review" before it gets noted on Wikipedia. Sure, reviews get greater weight, but when they aren't available, individual studies are citable. Somehow MEDRS even allows for popular press and press releases, as well. II | ( t - c) 14:26, 30 June 2008 (UTC)
Here's why: it's trivially easy for anyone with slight sophistication to mine the primary "reliable" medical literature to advance whatever editorial point they like. I'm thinking of creating an article claiming that HIV cannot possibly be the cause of AIDS, sourced entirely to "reliable", Pubmed-indexed, peer-reviewed publications. It's easy with selective citation, and the only real defense is common sense - an editor's selection and presentation of primary medical studies should never contradict, supersede, or ignore syntheses by reliable third-party sources. MastCell Talk 19:13, 1 July 2008 (UTC)
Colin has admitted, as is obvious, that the "mini-reviews" in primary articles are secondary sources. To be precise in our language, secondary should not be used as a synonym for reviews. Here is what I attempted to add ( diff 1, diff 2). The final text looks as follows: (please read slowly and specifically point towards problems in the addition, and evidence supporting your conclusions if possible)
A secondary source in medicine summarizes one or more primary or secondary sources, usually to give an overview of the current understanding of a medical topic. Review articles and specialist textbooks are examples of secondary sources. A good secondary source from a reputable publisher will be written by an expert in the field and be editorially or peer reviewed. Journalists writing in the popular press, and marketing departments who issue press releases, tend to write poorer secondary source material; however, such material may be appropriate for inclusion in some contexts. (Begin addition) Primary research articles can also be secondary sources of prior literature, and are superior to the popular press in this respect. The best secondary sources are systematic reviews, which look at all available evidence on a particular topic and justify the inclusion and exclusion of evidence. After systematic reviews, preference should be given to the most up-to-date reviews and if necessary primary articles which discuss the largest range of evidence on a particular subject in the most non-technical, analytical manner.
(My addition in red.)
Eubulides reverted this. He stated that he disagreed that primary articles can be secondary sources. However, that primary articles are secondary sources is a fact, just like gravity is a fact. I pointed out that the popular press is a citable secondary source and requested that he explain how the citation and discussion of a primary study is not. He has not answered this question. My edit ranks the sources as follows: 1) systematic reviews, 2) reviews, 3) discussion in primary articles, 4) popular press. We can address the particular issue of citing primary articles separately. Let's discuss the issues with this edit right now. II | ( t - c) 23:35, 30 June 2008 (UTC)
Going down the list (I was going to just interject on each point, but thought you might take offense -- let me know if you want to try that):
With the above discussion in mind, would the following change make sense? In WP:MEDRS #Article type, change from:
to:
Eubulides ( talk) 20:36, 1 July 2008 (UTC)
I don't understand how/why the introduction/discussion sections of primary articles are categorically "much less reliable" than reviews. In general, what I've read lately suggests the exact opposite. Categorical statements to this effect are misleading. Now, we can all point to examples. I can show you 3-4 poor review articles right now, out of the 5-6 that I've read lately. Many reviews, unfortunately, seem to describe primary articles briefly rather than analyzing them. The reality is that reliability is, as it should be, more connected to the author than the type of publication. This can be assessed by looking at how many papers the author has published on the topic. "Review", "primary article" -- these are simply labels. An actual example: Let's say you've got a 2006 review published by one author with 11 papers (the second author 11) which discusses, among other things (as reviews frequently do) Quality of Life (QOL) in coeliac patients. It simply describes a few previous studies, noting generally that studies suggest that women suffer more after diagnosis. In 2007, a primary article appeared on Quality of Life whose main author has published over 200 papers (the second 121, the third 303, the fourth 839), many of them on coeliac disease and several on QOL specifically. These are the premier experts of the field. He discusses the issue in detail in his paper, citing more QOL papers than the review does. He differs with (refutes?) the review, noting that women in western countries "report a lower HRQOL measured by the SF-36 than men". Is he more reliable, as an expert on QOL and coeliac disease? Why would he not be?
In general, you may have a better chance of hearing from the real experts, and hearing their in-depth analysis, in the discussion sections of their papers. The subpar "experts" may be more likely to publish reviews than to do "primary research". And these reviews cover such a wide range that often they just describe studies in simple sentences, which offers no value over the article's own abstract. II | ( t - c) 04:30, 2 July 2008 (UTC)
[Outdent]If you cut the "far", then I'd be more inclined to support it. Also, it could be worded better: "Research papers are primary sources, although they are secondary sources in their discussion of prior literature. In this respect they are typically less reliable than reviews because they cover fewer sources (?)." However, that section is not the right section to be discussing the reliability of different article types. Why don't we have a section focused upon reliability of different article types? Also, that section (/Wikipedia:WikiProject_Medicine/Reliable_sources#Article_type) has a factual error: reviews are more likely to contain original research than systematic reviews. Systematic reviews are highly unlikely to contain original research. Reviews can be variable -- some take a bunch of articles and come to a novel conclusion based upon that literature. Systematic reviews simply analyze the rigor and overall conclusions of studies on a narrow topic. This error should be fixed.
In fact, this entire article is rather redundant and scattered, and could use some serious copyediting. I'll do some after we resolve this, and we can deal with it per BRD. II | ( t - c) 06:59, 2 July 2008 (UTC)
Using a PubMed count to establish relative reliability is a phenomenally bad idea. It's like using the number of books an author has published to establish how good a writer they are. David Reardon publishes far more on abortion and mental health than nearly anyone else, but his findings are minoritarian if not discredited in the field. If there is really a head-to-head battle between the findings of two primary articles, then the "referee" should be found in summaries and syntheses of evidence by expert panels, major professional groups, or in review articles published in reputable, high-impact journals. It's not complicated unless we make it so. MastCell Talk 19:30, 2 July 2008 (UTC)
http://jama.ama-assn.org/cgi/content/full/300/1/98 - an excellent set of instructions for people wanting to submit letters to JAMA. I think numerous points in that article are readily applicable to this policy. JFW | T@lk 08:29, 2 July 2008 (UTC)
I suggested to Eubulides that he look at what the academic community says about reviews. I've found some studies. PMID 1834807 (1991) discusses a system used to rate reviews. This could be of significant use for us, as we need to evaluate reviews. It would be interesting to see where this has gone. Related links in PubMed has a vast number of related articles. PMID 9496383 (1997) finds that most reviews are hardly systematic (this is a bad thing). PMID 10610646 (1999 - free access) finds the same thing. PMID 17606172 (2007) focuses on meta-analysis, but finds improvement. PMID 16277721 (2005) says meta-analyses are generally poor. PMC 1602036 (2006) evaluates Cochrane reviews vs. industry reviews -- obviously, industry reviews are worse. PMID 9092319 (1997) is a guide for finding systematic reviews. PMC 2379630 (1993) specifically compares OR and reviews. It notes that the answers provided by broad reviews should not be accepted uncritically as valid. Conclusion: Certainly, as my original edit to MEDRS reflects, reviews should generally get priority over primary articles -- but people need to recognize the difference in reviews. Most reviews I've seen are not systematic. Here is an example of an overly broad review. These reviews are less reliable than OR in many cases, since they are often both written by an outsider and give cursory attention to many complex issues. Eubulides has argued that systematic reviews should not get priority; this is directly contradicted by the scientists, and does not make good sense. As I stated earlier, it should go: 1) systematic reviews; 2) good, preferably quasi-systematic -- i.e. a review which states its methods for including literature; 3) OR/broad reviews; 4) popular press/press releases. II | ( t - c) 10:08, 2 July 2008 (UTC)
JFW: you seem to conflate meta-analysis and systematic review. The lead to systematic reviews is not misleading, although it is not sourced. Systematic reviews are considered the top in quality in medical science. You're writing contrary to much evidence presented (above). All of them say that "reviews need to be systematic". Otherwise you're at the mercy of prejudices -- you don't know how much they've just grabbed what they want you to hear. Asking specific questions is a good thing in a review, because you can't cover a ton of questions well -- there are just too many studies. As far as your assumptions of bad faith -- well, they are what they are: uncivil assumptions of bad faith. II | ( t - c) 23:34, 2 July 2008 (UTC)
Eubulides: It backs up my assertion that overly broad, non-analytical, and non-systematic (low-quality) reviews are little better than original research. Reviews should be ranked. There are specific, concrete criteria for evaluating reviews, which that paper lists. When a review fails those criteria, it is not much better than an original research article. I suggest that we incorporate the basic review assessments that that article proposes into MEDRS: 1) Is the question clearly defined? 2) Does the review focus on a specific question? 3) Is the author obviously biased? 4) Are the methods used to gather articles described? 5) Are references scanty? 6) Are the primary studies critically appraised? 7) Are the research design and population described? These are a good start in telling people how to analyze reviews. It's not enough to simply say "use high-quality reviews". Distinguishing between high-quality and low-quality reviews is possible, and should be done. I believe low-quality reviews are typically little better than OR, but obviously there are wide variations. II | ( t - c) 02:08, 3 July 2008 (UTC)
The Cochrane Collaboration has various sets of criteria for evaluating studies. Those criteria may be useful models here. Those criteria do not include the impact factor of the journal in which the study appears. Setting criteria is a very active area of work, as the best choice of criteria is an open problem. Try a Google search for "site:cochrane.org criteria" to find a slew of conference abstracts. -- Una Smith ( talk) 15:19, 4 July 2008 (UTC)
I'd like to venture a comment here, even though I'm coming in late on the conversation. It seems to me that the point behind this 3rd-hand, secondary source prescription is that we want to make sure that the views we use reflect a general consensus within a significant portion of a field. Reviews as a rule are neither brilliant nor innovative, and it's precisely those lacks that make them useful in WP - they usually reflect a nice run-of-the-mill consensus in the discipline. The 'literature' sections of primary research, by contrast, may (and often do) include recent, innovative, primary research material by other authors doing similar work. There's no question that primary research authors cherry-pick their sources for the purposes of support, criticism, or relevance to their own work. Primary researchers are trying to manufacture or influence the current understanding in their field - that's what research is for - and so it's not at all clear to me that primary research will make clear distinctions between the actual current understandings in the field and the author's personal perceptions of what the field should understand.
If Wikipedia has to wait seven years for a result to become fully accepted by the medical community, then Wikipedia should wait seven years. Best not to get ahead of scientific consensus... -- Ludwigs2 09:39, 7 July 2008 (UTC)
Is there any way to give a rule of thumb for how recent sources should be? How old is too old? Are sources from the mid or early '90s OK? I guess this would vary based on the subject and how fast it's developing. Any advice about how to gauge this? delldot talk 15:29, 10 July 2008 (UTC)
Here are some rules of thumb for keeping an article up-to-date while maintaining the more-important goal of reliability. These guidelines are appropriate for actively-researched areas with many primary sources and several reviews, and may need to be relaxed in areas where little progress is being made and few reviews are being published.
These are just rules of thumb. There are exceptions:
(end of draft) Eubulides ( talk) 17:08, 10 July 2008 (UTC)
I have some significant concerns about this, perhaps mostly because of the way "rules of thumb" turn into sweeping, iron-clad requirements after a few months, and perhaps because I get the feeling that none of you have any connections to people who write reviews and therefore put too much faith in them.
Sure, if you're only working on articles about congestive heart failure and colon cancer and other common conditions, then the concepts here make a great starting point. However, this isn't going to work at all for very rare diseases, where a well-written case study from twenty years ago may actually be your most reliable source. Consider ODDD. I know: you've never heard of it. But go search for oculodentodigital at pubmed.gov, and limit your search to the last five years. You'll get thirty-five (35) papers. The only "review" on the disease in the last five years (as opposed to the genetics and physiology that underlie the disease) is actually a case study involving three patients. It's dated 2004. I don't expect a better review to appear in 2009, or even by 2014. My expectation is based on the fact that there have apparently never been any proper reviews published for this condition. And what is the first thing the editor reads here? "Do not cite primary sources" -- the only sources that exist for this disease.
This also isn't going to work well for many aspects of uncommon diseases. For example: consider some third-string treatment for an uncommon cancer. You've got a twenty-year-old paper that gives you a success rate. It's the only randomized controlled study ever done using the specific treatment in this specific cancer. The recent review cites this paper and summarizes the conclusions in two words: "poor prognosis." According to this, the actual survival rate is suddenly not important, because the study was done before the review, and the review doesn't re-report the actual numbers. Is that what you really want? To put an expiration date on data?
I also think that citing any study that is mentioned favorably in recent reviews should be acceptable. For one thing, we get more detailed articles that way. For another, if the original article is retracted, then we know what we need to change. A review that cites Hwang Woo-Suk favorably is not going to be retracted just because the world later discovered that this Korean scientist fabricated much of his stem cell research.
As ImpIn points out, this scheme works poorly in cases where the recent reviews only cover certain aspects of a disease. I frequently see reviews that are very good in terms of treatment but that completely neglect epidemiology. It's hard to find epidemiological information for less developed countries. Sometimes the best we can do is a rather old paper. The fact that an American or European author skips over the prevalence of a disease in Africa or South Asia doesn't mean that this kind of information is unimportant for our worldwide encyclopedia: it means that the review is incomplete. In very common diseases, nearly all of the reviews are deliberately incomplete: you'd write a review on a specific aspect or sub-type of hypertension, because otherwise your review would be the length of a book. I won't say that the authors are necessarily biased because of this -- but reviews cannot be assumed to be complete.
Finally, this advice is completely wrong for history sections, for what ought to be perfectly obvious reasons.
Yes, I know: you only meant this to apply to certain "actively-researched areas with hundreds of primary sources and dozens of reviews". But it's not actually that obvious to those who don't already know what you intended to accomplish. The first thing the editor reads is "Do not cite primary sources." As written, I don't think that this communicates what I think you want to say.
I don't mind stating a general preference for recent reviews, although I still prize editor judgement and a good final product over mindless compliance with rules. I could probably support a system of rules like this if it were clearly stated that this guidance only applies to the sections of an article that deal with current practice in diseases where proper reviews are readily available. I might also add that primary papers aren't bad in themselves, so long as they don't actively contradict all of the recent reviews. Fundamentally, I think that if we're going to publish this, then the caveats and restrictions need to go first, not last, and they need to be stated more strongly than the guidance. For example, "Do not cite primary sources..." should be "Consider citing a recent, comprehensive review in a reputable journal instead of older primary sources." The section might begin with the sentence about this advice only applying to articles on actively-researched areas with hundreds of primary sources and dozens of reviews, although the general principles might be applicable in some less common diseases. WhatamIdoing ( talk) 02:52, 11 July 2008 (UTC)
I'm glad of WhatamIdoing's comments and the changes made. I think we can sometimes concentrate too much on the big diseases that attract controversy and edits from POV pushers. Wrt citing primary sources for studies you wish to comment on, I have found it useful to use the following style:
In effect, the primary source is being used purely to show the study took place and to act as a footnote for the reader should they wish to read the primary material about the study. The secondary source is used to back up the conclusions of the study. My preference is to restrict the explicit mention of studies (the History section is one obvious example) since if the results of the study are now accepted widely, then they can just be stated as facts. Colin° Talk 11:22, 11 July 2008 (UTC)
I agree with citing both in many cases, and I said exactly this above: "It might be best to cite the paper and the review, or something, but it is misleading to cite a review for a statement which is actually just being repeated [from a primary study]." I know that the APA parenthetical citation style encourages you to cite the original source being cited whenever possible, and I imagine a similar practice is at least somewhat encouraged in footnote referencing, because it's much better for the readers to know the original source of an assertion. As my quote shows, I also agree with Steve -- in less controversial articles, citing key primary articles is the appropriate way to go unless the review is doing some critical appraisal, synthesizing several studies -- and often reviews are not doing critical appraisal, but rather just listing studies. II | ( t - c) 23:33, 11 July 2008 (UTC)
WhatamIdoing, "the primary reason that reviewers don't list every single study, with a rationale for including or excluding it, is space constraint. You would spend pages and pages just listing articles for celiac disease." A couple of points: good reviews should be focused for this reason, and it is not that hard to cover all the articles once you've made your question specific, because one can group them like "Several studies found such and such"(1-6). The primary reason that most reviews do not list all articles is 1) poor research, 2) overly broad focus (look for a more specific review), or 3) bias. If you want examples, I've seen plenty. Look at my section above. There are also academic articles which state that reviews with specific questions are preferred, and the Cochrane reviews follow this guideline as well (browse through them). II | ( t - c) 01:20, 13 July 2008 (UTC)
I think the debate over review quality has been done to death and there is very little we, as Wikipedians, can do about improving the literature. We use the best sources we can. Discussions over whether this or that review is biased should be taken to the relevant article's talk page. II, I would take your lecture on what makes a good review, and how we can identify bias, more seriously were it not for this diff proudly displayed on your user page. I particularly enjoyed the "Other studies have found that coconut oil can help in weight loss and poison recovery." statement and sourcing. Colin° Talk 21:01, 13 July 2008 (UTC)
Discussion on the draft itself seems to have died down, so I added it, except that I omitted the detailed example of citing a Cochrane review, which on rereading didn't seem to be worth all that space on the project page. If someone else thinks that example is worth while please feel free to add it of course. Eubulides ( talk) 18:03, 14 July 2008 (UTC)
Make readers aware of any uncertainty or controversy. A well-referenced article will point to specific journal articles or specific theories proposed by specific researchers.
The popular press generally does not cover science well. Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance presenting a new experimental medicine as the "discovery of the cure" of a disease. Also, newspapers and magazines frequently publish articles about scientific results before those results have been peer-reviewed or reproduced by other experimenters. They also tend not to report adequately on the methodology of scientific work, or the degree of experimental error. Thus, popular newspaper and magazine sources are generally not reliable sources for science and medicine articles.
What can a popular-press article on scientific research provide? Often, the most useful thing is the name of the head researcher involved in a project, and the name of his or her institution. For instance, a newspaper article quoting Joe Smith of the Woods Hole Oceanographic Institution regarding whales' response to sonar gives you a strong suggestion of where to go to find more: look up his work on the subject. Rather than citing the newspaper article, cite his published papers.
One method to determine which journals are held in high esteem by scientists is to look at impact factor ratings, which track how many times a given journal is cited by articles in other publications. Be aware, however, that these impact factors are not necessarily valid for all academic fields and specialties.
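The citations-per-article arithmetic behind a two-year impact factor can be sketched as follows. This is an illustrative toy only; the function name and numbers are hypothetical, not drawn from any real citation database:

```python
# Minimal sketch of the standard two-year impact-factor arithmetic.
# Illustrative numbers only; real ratings come from citation databases
# such as Journal Citation Reports.

def impact_factor(citations_this_year, items_prev_two_years):
    """Citations received this year to items the journal published in
    the previous two years, divided by the count of those items."""
    return citations_this_year / items_prev_two_years

# A journal whose previous two years' 400 papers drew 1200 citations:
print(impact_factor(1200, 400))  # 3.0
```

The point of the caveat in the text is visible here: the same arithmetic gives small numbers for niche journals simply because few people work in the field, not because the work is poor.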
In general, journals published by prominent scientific societies are of better quality than those produced by commercial publishers. The American Association for the Advancement of Science's journal Science is among the most highly regarded; the journals Nature and Cell are notable non-society publications.
Keep in mind that even a reputable journal may occasionally post a retraction of an experimental result. Articles may be selected on the grounds that they are interesting or highly promising, not merely because they seem reliable.
There are a growing number of sources on the web that publish preprints of articles and conference abstracts, the most popular of these being arXiv. Such websites exercise no editorial control over papers published there. For this reason, arXiv (or similar) preprints and conference abstracts should be considered to be self-published, as they have not been published by a third-party source, and should be treated in the same way as other self-published material. See the section above on self-published sources. Most of them are also primary sources, to be treated with caution as described in various sections of this guideline.
Researchers may publish on arXiv for different reasons: to establish priority in a competitive field, to make available newly developed methods to the scientific community while the publication is undergoing peer-review (an especially lengthy process in mathematics), and sometimes to publish a paper that has been rejected from several journals or to bypass peer-review for publications of dubious quality. Editors should be aware that preprints in such collections, like those in the arXiv collection, may or may not be accepted by the journal for which they were written — in some cases they are written solely for the arXiv and are never submitted for publication. Similarly, material presented at a conference may not merit publication in a scientific journal.
There are techniques that scientists use to prevent common errors, and to help others replicate results. Some characteristics to look for are experimental control (such as placebo controls), and double-blind methods for medical studies. Detail about the design and implementation of the experiment should be available, as well as raw data. Reliable studies don't just present conclusions.
Responding to "How do you know what the point is? Discuss this in Talk before deleting."
I wish to remove the line that says "Some well known and respected popular science authors include Richard Dawkins and Stephen Jay Gould." Can the author of this sentence please explain the point of including just a couple of names out of potentially hundreds of worthy authors? Colin° Talk 14:27, 13 April 2007 (UTC)
(Apologies for the sarcasm – just a bit of fun).
Nice debate, sound conclusion :-) As an example, Sacks is infamous for his non-standard and sensationalist views on TS (I've heard other physicians opining on his writing). While we're on the topic: I don't understand either of these edits. [1] We need to get back to "Avoid citing the popular press, as they usually get it wrong." [2] The Merck Manual is an utter and total inaccurate wreck when it comes to TS; I hope it's better in other areas. SandyGeorgia ( Talk) 14:33, 13 April 2007 (UTC)
This page currently reads more like an essay than a guideline. That might be fine if we want to keep it an essay, and we may. If not, then the text needs to be condensed and kept to the point. There's a lot more that can be said on this topic and I really welcome other contributors. It might be best to let the text expand a bit before we start refining it back down. That said, if something important got deleted or watered down, then we should bring it back. Colin° Talk 14:44, 13 April 2007 (UTC)
I deleted the claim that broadsheets can be reliable sources of medical information, while tabloids are not. Some tabloids are excellent sources of medical information, and conversely.
Here's a good example from today's New York Daily News, about the automobile accident of New Jersey Governor Jon Corzine, who was not wearing a seat belt.
[3] A difficult recovery that could take months, by Christina Boyle, New York Daily News, April 14th 2007.
This is what teachers call a "teachable moment," an opportunity to teach an important message because the subject has everyone's attention. This story explains exactly how someone is hurt in an auto accident if they're not wearing a seat belt, and it explains Corzine's injuries in meaningful detail. I've read hundreds of accident reports in the engineering literature, and this tabloid news story covers all the essential points. Nbauman 14:12, 14 April 2007 (UTC)
I've removed the line:
This may be true but I've yet to see an example. If they do, surely it is more for certain social, historical or biographical information rather than for medical facts? Some examples would help. I've also removed the line:
Which IMO is not written in a suitable tone.
Finally, I've added a line to clarify where I think there is consensus for using and not using newspapers. Colin° Talk 11:37, 16 April 2007 (UTC)
Nbauman, earlier you said "who are we to tell people not to read them". I think this may be one source of our disagreement. This is not an article to advise people what to read (whether for pleasure or for research for a WP article). It is solely concerned with what we should cite. Certain books, newspapers, magazines and blogs may be a reliable source of medical information. They are not suitable for citation in an encyclopaedia (wrt medical facts). This page is not read by Wikipedia's readers - it isn't for them. It is for editors. Are you trying to help improve the quality of our sources, or defending popular journalism? Colin° Talk 17:11, 16 April 2007 (UTC)
The section, In science, avoid citing the popular press is complete personal opinion, completely unsourced, completely overgeneralized, and completely wrong. For example:
[5] Annals of Medicine: The Bell Curve; What happens when patients find out how good their doctors really are? by Atul Gawande, The New Yorker, Dec. 6, 2004
Gawande is an MD, and in addition to the New Yorker he writes for the New England Journal of Medicine. Does the author of this section believe that Gawande does not cover science well when he writes for the New Yorker, but does cover science well when he writes for the NEJM? Similarly, Gina Kolata, a PhD, used to write for Science magazine before she moved to the New York Times. I could give many similar examples.
Many articles in the popular press have identified problems in medicine that have been ignored by the peer-reviewed literature, for example defective heart defibrillators, or financial conflicts of interest in the committees that set guidelines and recommend drugs. Peer-reviewed journals often cite newspapers in the footnotes.
The writer of this section does not seem to have considered that scientific results are normally released first as presentations at scientific meetings before they are peer-reviewed, and that is where the newspapers and magazines find out about them.
I believe that every article in the popular press should be evaluated on its own merits. You can't replace critical evaluation with a rule of thumb like "The popular press generally does not cover science well."
This section should be completely rewritten. Nbauman 17:24, 14 April 2007 (UTC)
I've not finished reading this but already spotted something that highlights why newspapers make poor secondary sources. Page two discusses various operations and the variable success rate amongst surgeons. One example given:
The article doesn't fully cite the study, which is typical and understandable (though perhaps not for the online edition, which has no space concerns). So we don't know:
And so on. Most "for-professionals" articles would give more information than this, plus a citation. Without this traceability from Wikipedia to secondary source to primary source, our readers are limited in how much they can learn should they ask questions concerning the reliability of the data and how the author chooses to use it to make their case. The best popular science/medical books provide citations. So should online newspapers. Colin° Talk 23:09, 14 April 2007 (UTC)
Every oncologist might think "Ah, she's citing McArdle and Hole 1991" but almost none of our readers would. I was going to try to look up the paper myself but you've saved me the trouble. Unfortunately, it only goes to strengthen the argument that the primary source should be cited by WP rather than the newspaper (which may be less than helpful in finding it). For example, I can now cite:
which fortunately has the full text available free online. Even from the abstract, our reader can tell more than the newspaper provided. We can also see there were two follow-up letters, one critical and one supportive. PubMed also tells me that McArdle and Hole continued their research. Which is just as well: these were "patients with colorectal cancer presenting over the six years from 1974 to 1979". The quality of an operation performed 30 years ago is of diminishing interest to those going under (or holding) the knife today. They published a follow-up paper in 2002 (with a much larger number of patients, operated over the years 1991 and 1994) that confirmed that variability amongst surgeons was still a problem:
which fortunately has the full text available free online. Even from the abstract, our reader can tell more than the newspaper provided. We can also see there were two follow-up letters, one critical and one supportive. PubMed also tells me that McArdle and Hole continued their research. Which is just as well: these were "patients with colorectal cancer presenting over the six years from 1974 to 1979". The quality of an operation performed 30 years ago is of diminishing interest to those going under (or holding) the knife today. They published a follow-up paper in 2002 (with a much larger number of patients, operated on between 1991 and 1994) that confirmed that variability amongst surgeons was still a problem:
So it could be argued that the first might be a historical classic, but the second is of more relevance today. They have also answered some of my questions about poor patients doing less well:
I see they have refined their conclusions about surgeon variability, by showing that surgeon speciality is a better guide than just volume of work:
I could go on (for example, to check that those who cite this paper do so favourably) but in just a couple of minutes, I've found so much more high quality source material to help improve a WP article on either colorectal cancer or issues of surgeon competence. Colin° Talk 07:00, 15 April 2007 (UTC)
Gawande's work for NEJM is peer-reviewed. His journalism is not. In general, peer-reviewed science is the preferred source for any medical content, with all other sources inferior to it. Within peer-reviewed science, I think we should adopt EBM grading. A meta-analysis or systematic review is much more powerful as a source than individual trials, case-control studies, case series or case reports. JFW | T@lk 20:14, 15 April 2007 (UTC)
Re popular press, now that TV and radio news programs have websites, I see a need for some guidance re citing them. See for example fetus in fetu, where one editor cites ABC and MSNBC news as sources of medical data (incidence, treatment). -- Una Smith 15:20, 9 July 2007 (UTC)
This section needs a paragraph explaining that niche journals have low impact factor regardless of quality, because their readership is small; niche journals can be evaluated (and to some extent compared to "core" journals) based on their average article half-life, meaning the number of years over which an article is cited. -- Una Smith 15:28, 9 July 2007 (UTC)
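The "half-life" comparison described above can be illustrated with a toy calculation. The citation ages below are made up for illustration; a journal's real cited half-life comes from citation databases:

```python
# Toy illustration of "cited half-life": the median age, in years, of
# the articles that a journal's incoming citations point to.
import statistics

# Hypothetical ages (years since publication) of ten cited articles:
citation_ages = [1, 1, 2, 3, 3, 4, 6, 8, 10, 15]
half_life = statistics.median(citation_ages)
print(half_life)  # 3.5
```

A long half-life suggests articles stay useful for years even if the journal's raw impact factor is modest, which is exactly why the two metrics can rank niche journals very differently.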
If at all possible, avoid promoting the idea of rating an article by the author's authority: where they work, their title, their rank. Judge the person by the quality of the work, not vice versa. -- Una Smith 15:28, 9 July 2007 (UTC)
This page was set up as a Proposed Guideline. It would appear that WP only accepts such a proposal for a finite time before marking it Historical or Rejected. Can we decide what we are going to do with it? My gut feeling is that there currently isn't enough traffic or discussion on this for it to move forward quickly enough to become a formal guideline before someone retires it again. If someone else wants to beat a drum to round up some contributors, then great. I was hoping that we might get contributions from editors with experience in writing, or training in reading, medical articles. I do believe the project needs these guidelines.
But it may be that those needs can be met by taking the banner off and leaving it as an informal guideline in Project space. Thoughts? Colin° Talk 11:35, 15 May 2007 (UTC)
I don't think this guideline is ready to be adopted. Much of it is unsourced, unsupported personal opinion. For example, "The popular press generally does not cover science well." Who says so? What is their evidence? Isn't this an overgeneralization? In some cases, the Wall Street Journal has turned out to be more reliable than the New England Journal of Medicine.
Why doesn't the entry cite the extensive literature on how the popular press covers science (which finds that the quality of coverage varies from very good to very bad)? What about the library literature, such as Magazines for Libraries?
A fundamental problem is that this is just a list of sources and judgments about them. It wouldn't be much help in resolving real disputes that go on in real articles. Define the problem. I would suggest that you look at some actual controversial medical entries, and examine the disputes that come up over reliable sources, etc. Look at Dichloroacetic acid, or Diabetes.
People do cite the popular press all the time. What should we do about it? Most of the peer-reviewed literature isn't available free on the Internet, so it isn't verifiable to someone who doesn't have a subscription. What do we do about that?
What are the disputes and how should they be resolved, in terms of reliable sources? Nbauman 14:26, 15 May 2007 (UTC)
Maybe the proponents could provide some examples of articles which would be removed if this guideline was adopted and why they are inappropriate for inclusion at WP. -- Kevin Murray 14:32, 15 May 2007 (UTC)
I agree 100% that this is not ready to become a guideline right now. That's why I now think our use of "Proposed" may be a little premature. I had thought the label was OK for saying "I propose we have some guidelines on this. Here's a start I've made...". But it is being interpreted as "Here's a set of guidelines I propose. Discuss...". I don't really mind what the banner says (but not "rejected", please) or if we don't have one. It would be nice to have some kind of header/intro that said:
I think Nbauman makes some good points, but WP guidelines do not need to be sourced. Opinion is fine if there is consensus.
I'm finding some of the newspaper arguments are starting to repeat so would welcome input from others. Can we please stay focused that these are guidelines for medical facts. When has The Wall Street Journal ever been "more reliable" for medical facts than the NEJM? I'm not talking about breaking some medical scandal a few weeks early. I've just tried searching their online site for medical info and have been unable to find anything other than articles about how some drug approval or loss of patent is affecting some company's share price, or a paragraph on new research opportunities (that affect a company's share price). Colin° Talk 15:21, 15 May 2007 (UTC)
The statement "In some cases, ... has turned out to be more reliable than" just doesn't work. I'm sure you can find cases when it was "more accurate than" for a given topic and moment in time. But "reliable" implies one can regularly, not just occasionally, depend on it. If this was an essay on "Accurate sources" then a survey of the "extensive literature" on different sources and their quality might make an interesting read.
The other aspect we need to consider is "useful". I think the example on the Bell Curve above showed that newspapers aren't as useful a source as a journal, even if the information is technically accurate.
I'll have a look at your other points later to see what I can find. Colin° Talk 15:58, 15 May 2007 (UTC)
Here's an example of using the popular press, which I cleaned up using peer-reviewed or medical sources just last week — from an area I'm familiar with.
Note the headline — gene found !! This is what the popular press does.
When in fact ... The peer-reviewed sources are available, and the finding is reported in a way that is more "scientifically" and "medically" correct. The BBC merely parroted some portions of the Duke Medical News, while adding nothing clarifying or illuminating.
{{cite press release}}
Note the more cautious and accurate headline in the Duke Medical press release, and The researchers estimate that the SLITRK1 mutations account for 5 percent of trichotillomania cases. This gene is not significant in and of itself (it is not *THE* gene that has been discovered as some earth-shattering event), as much as it provides a vehicle for future research directions. There is no need to cite the "hyped" BBC version, when the peer-reviewed article can be found in a medical library (and we don't choose our sources based on whether they are easily available online, free or not - we choose the best and highest-quality sources period, even if they're not available online, which should be the peer-reviewed medical literature).
Further, if the issue of citing the popular press is the only problem with these guidelines, we can work on the wording. But, whenever peer-reviewed sources are available, they could at least be preferred over the popular press, which tends to hype results and take them out of context or proportion. SandyGeorgia ( Talk) 16:12, 15 May 2007 (UTC)
Yes, that's what I'm trying to do. I only object to the oversimplified, unsupported statement that (1) If it appears in the popular press it's not reliable and (2) if it appears in the peer-reviewed literature it is reliable.
My position is that the popular press varies in reliability. I gave you links to a web site run by doctors and journalists, in which doctors review and evaluate the reliability of articles in the popular press. I also cited reference books that librarians use, such as Magazines for Libraries, that evaluate the reliability of popular magazines. Some popular publications and newspapers are more reliable than others.
Peer-reviewed journals are usually more reliable, but not all of them. I linked to a publication, the Brandon-Hill list, that lists the most reliable peer-reviewed medical publications. Some peer-reviewed publications are financed by drug companies, or medical device companies, and publish "peer-reviewed" articles that support the use of their products. So some peer-reviewed journals are more reliable than others.
I would suggest that the guidelines include language like that above.
In general, good peer-reviewed publications are more reliable than the newspapers or popular press. But there are lots of exceptions. Even the peer-reviewed journals, like The Lancet, will publish articles that they know are wrong, because they want to get the argument out for debate, as they did with that article on rats who ate genetically modified potatoes. The BMJ (I think) published an article on mercury preservatives in vaccines which they and every legitimate doctor have repudiated, but people keep quoting it. Nbauman 21:26, 16 May 2007 (UTC)
Hello. Are the AHRQ guidelines on grading evidence too technical for inclusion here? I'd really like to see a summary of the grading as outlined in evidence-based medicine included here - I think it's important, no? Just a table like this:
Grade | Evidence | Description
---|---|---
A | Ia, Ib | Requires at least one RCT as part of the body of literature of overall good quality and consistency addressing the specific recommendation
B | IIa, IIb, III | Requires availability of well-conducted clinical studies but no RCTs on the topic of recommendation
C | IV | Requires evidence from expert committee reports or opinions and/or clinical experience of respected authorities. Indicates absence of directly applicable studies of good quality.
Would do the trick. Thoughts? Nmg20 17:37, 15 May 2007 (UTC)
I think that would be a good and even necessary addition. You can't write about medicine if you don't understand this. Nbauman 18:17, 15 May 2007 (UTC)
Please add to the page, we can tidy/refine later. There is more than one way of grading sources. The current headings (Periodicals, Books, Online) could perhaps be demoted to 2nd level under a new "Media types" heading (or similar wording). Then we could have another top level heading for "Research types", for example.
We should link to Trish Greenhalgh's "How to Read a Paper" Series. If people read that, they'd have a good idea of different study types. Colin° Talk 22:20, 15 May 2007 (UTC)
I find that it is often easier to access reprints of conference proceedings online than complete journal articles. I'm not sure if that is only true for myself as a veterinarian or if it also applies to M.D.s. Where do these proceedings fall in the matter of reliability, in this project's opinion? Note that these are major conferences, with well-respected lecturers, and I'm only referring to reviews of topics, not new research. Thanks. -- Joelmills 03:12, 25 June 2007 (UTC)
Still easier: don't bother citing anything. (That was a joke.) I cite such sources only as a last recourse, and then only if I know the article actually was presented at the conference. Sometimes the proceedings are published before the conference, and then at the conference the article is retracted! This is more common when the proceeding volume consists of abstracts or short articles that are little more than abstracts. Some proceedings volumes are peer reviewed and/or of the highest quality, but they are in the minority, and you really have to know the specific research community to know which proceedings volumes are top-notch and which ones are not. It isn't enough to go by series title, because this can change from year to year, depending on who is the editor. -- Una Smith 15:40, 9 July 2007 (UTC)
In the spirit of being bold, I've made a few changes to the proposal. Here is a summary diff. Feel free to revert them if they seem redundant or inappropriate. In general, I'd like to be a little more explicit on the fact that primary sources (journal articles reporting original findings) are a welcome and even necessary part of medical articles, but that the interpretation of such research must hew carefully to that provided in reliable secondary sources (reviews/textbooks). I've seen significant issues with editors citing a number of basic-science journal articles and then leaping to a totally off-the-wall conclusion, which is then defended as "cited content".
Another issue is articles on supposed medical conditions which have never been reported or recognized by any medical authority (see mucoid plaque). I would favor including something in the guideline along the lines of, "If a purported medical condition, test, or treatment has been described and evaluated by the medical community, then it should be easy to cite reliable sources on the subject. In the absence of such sources, topics should not be presented as if they are accepted by the medical community." But perhaps this is overstepping the bounds of this proposed guideline. MastCell Talk 17:04, 25 June 2007 (UTC)
Discussion moved from Wikipedia talk:WikiProject Clinical medicine:
WP:MEDRS seems to be at odds with WP:MEDMOS; MEDMOS encourages the use of PubMed references, MEDRS implicitly discourages them.
WP:MEDRS states:
In my opinion:
I look forward to the discussion. My thoughts on this arose from this discussion-- and are related to changes to the McClintock effect article. Nephron T| C 06:12, 24 June 2007 (UTC)
I have been looking at WP:MEDRS, and I would be relieved to have some guidelines such as those that are listed. This would become very relevant to topics in dentistry, such as "new-and-improved" products but especially on fluoride and amalgam. It seems to me that, by far, the most important item in MEDRS is that an article must "present the prevailing medical or scientific consensus." Anything else placed in an article should be labeled as a minority view or one that is not accepted by the established consensus. As long as this principle is followed, then I do not foresee major reliability or original-research problems arising. Secondary sources can be encouraged in the guideline to make certain that medical/scientific consensus is presented, but I think the most important point to emphasize is that (regardless of the source) the content presented in the article, whether held by consensus or a minority viewpoint, must be presented as such. Saying all this, I hope this proposal can eventually be elevated to a guideline with a little work. - Dozenist talk 14:26, 25 June 2007 (UTC)
Have a look at this old version of Green tea. The FDA rejection of health benefits can be found here, which, although in the form of a letter, is the result of a serious review of the available evidence. Look how it is dismissed:
I don't know much about green tea or those studies, but heavyweight studies such as the FDA one should not generally be placed lower in the importance-hierarchy than individual research papers. It is a common misconception on WP that primary is better, probably due to the word's other uses in the English language. MEDRS must not give this impression. Colin° Talk 18:17, 25 June 2007 (UTC)
To my mind, "undergraduate medical textbooks" are absolutely appropriate as sources for medicine-related articles, and are likely to be more appropriate for an online encyclopaedia than postgraduate ones. I've reverted this change in the main article for now, but reckon it merits a discussion on here... Nmg20 ( talk) 16:01, 17 November 2007 (UTC)
I agree with almost everything you've written, and no, undergrad textbooks wouldn't normally be referred to in medical papers. However - at the risk of stating the obvious - we're not trying to produce medical review papers here, we're trying to provide a detailed-but-accessible summary of available medical information.
Undergraduate textbooks have several advantages from this point of view. They're written relatively simply, they rarely include controversial information (which is not to say that we shouldn't include such info, merely that it can be sourced elsewhere), and they are generally excellent summaries of the currently accepted medical / scientific understanding of whatever subject they deal with.
I'd also take issue with the idea that textbooks are rarely written by a specialist in that field. To take three textbooks I myself have used and know are commonly recommended: Obstetrics and Gynaecology is by Lawrence Impey, MRCOG and consultant obstetrician at the John Radcliffe. Neuroanatomy is by Crossman (prof in Anatomy at Manchester) and Neary (professor of neurology). Even the crap-sounding Cardiovascular system at a glance is by Aaronson (reader in pharmacology at GKT/KCL), Ward (professor of respiratory cell physiology at GKT/KCL) and Wiener (professor of medicine and physiology at Johns Hopkins). I'm not sure these are exceptions - do other contributors have views here?
I agree, however, that exam-question books are not suitable sources, and nor are cramming books - but I think we would lose out by excluding textbooks. Put it this way - people will still be using newspaper articles in medical articles here, and those are far worse secondary sources than medical textbooks... Nmg20 ( talk) 23:36, 17 November 2007 (UTC)
I seriously doubt that *any* undergraduate textbook covers Tourette syndrome accurately; I would not want to weaken this guideline to allow their inclusion. SandyGeorgia ( Talk) 16:29, 18 November 2007 (UTC)
In some of my recent article work (e.g. subarachnoid hemorrhage, Wilson's disease, ascending cholangitis) I have found that even the best recent clinical reviews are still short of information, especially on the softer areas like quality of life and prognosis. I find myself reaching for primary sources to complement the main reviews, but I remain concerned that we are opening ourselves up to WP:SYNTH. Are there any views on this? JFW | T@lk 15:10, 18 June 2008 (UTC)
My point is that reviews sometimes don't cover the points that you'd really want to address. For instance, on SAH I wanted to mention the fact that many people with previous SAH have persistent headaches. The only evidence for this could be found in a primary research study that definitely addressed the question, but is of inferior strength on our "hierarchy" of sources. JFW | T@lk 10:13, 19 June 2008 (UTC)
(Please skip down to the next sub-section for another succinct introduction.)
I think there's a ubiquitous misunderstanding of the word secondary reflected in this article. A primary study generally reviews and discusses its findings in light of prior evidence. This makes it a secondary source for information on that prior evidence. I've noted this with an RfC at WT:NOR here, and also over at Talk:Coeliac disease#Misunderstanding of secondary in the context of primary studies and reviews. The question is: is a reviewer necessarily more credible to comment on the prior science than a researcher discussing a primary study, all else equal? I don't think so -- although there may be a small bias, I don't think the reviewer should be considered immune to these biases. Now, systematic reviews help to eliminate bias by forcing the reviewer to be precise and evaluate all studies -- but these are uncommon, and still susceptible to bias. In summary, more importance should probably be placed on the date of the publication and the comprehensiveness with which it approaches a topic. Very broad reviews are likely to miss important details which specialized papers will discuss. ImpIn | ( t - c) 06:44, 28 June 2008 (UTC)
Secondary sources are preferable to primary ones, even if we're talking about the "previous work" area of primary sources. As a practical matter, when a secondary source is reviewed, its reviewers check more carefully that it's comprehensive, neutral, etc. For a primary source, reviewers concentrate on the new results being reported, and tend to treat the previous-work section less carefully. Generally speaking, primary sources try to make a point and to advance research in a particular area, and are more prone to list previous work that agrees with them, and are less prone to list other sources that disagree; whereas secondary sources are trying to cover a topic more generally and fairly, and are a much better way to achieve NPOV. Of course, this is just a tendency, and one can find bad secondary sources and good primary ones; but it is a strong tendency and should not be ignored. Eubulides ( talk) 10:36, 30 June 2008 (UTC)
Generally speaking, primary sources try to make a point and to advance research in a particular area, and are more prone to list previous work that agrees with them, and are less prone to list other sources that disagree; whereas secondary sources are trying to cover a topic more generally and fairly, and are a much better way to achieve NPOV.
I think you're assuming bad faith. There are scientific facts which are being suppressed for a pedantic reason, i.e. that they are not cited in a review. The fact is that a review focused upon casein in wheat will probably never happen -- thus the likely sensitivity of coeliac patients to casein will never be mentioned. The strong finding of budesonide is similar; that study may not be replicated for another few years. The longstanding (7 years?) misunderstanding of the word secondary is not a minor issue. Nor is it minor that you seem to place a greater emphasis on a study's mention in "a high-quality review" (based on a wiki editor's opinion) than on replications. You're fine with a lot of behind the scenes editorial work in interpreting reviews which are high-quality, but when citing scientific facts stated in plain language, you seem to think it is verboten. I don't think that makes sense. The former seems more questionable to me than the latter. Interesting findings should be reported, and similar studies can be reported alongside. That's not SYNTH, that's just pointing to studies. For example, at least 4 studies have been done showing Se reducing the toxicity of MeHg (methylmercury), with 1 exception. These are not all cited in any one review; they are cited in different reviews, and the most recent (2007) in perhaps none. I think stating that "Several studies have found that Se reduces the toxicity of MeHg in rats, with an exception"[footnotes] makes sense. You apparently do not.
Also, your wording makes it unclear: when exactly could I cite a study such as the casein one? Should it be replicated once? After there's a systematic review on casein and coeliac patients? After that one study is mentioned in a review? How about the budesonide study? There is nothing in MEDRS which says you cannot cite primary studies, especially remarkable ones like these. You're actually pushing for a policy which does not exist. The current policy even says that popular press articles can sometimes be cited, and here you're fighting tooth and nail against the addition of remarkable primary studies.
Note: I stand by my censorship comment; whether the censorship is intentional or not, it amounts to censorship. In case you haven't noticed, I'm not at Wikipedia to win popularity contests. You're appealing to your own fictitious policy to keep interesting, encyclopedia-worthy content out of the encyclopedia. Further, this impels other people to do the same, and allows people to justify censorship ( recent example). Wikipedia is not conservativopedia; if Einstein had published his paper on Relativity today, we would not want to "wait until it is verified" to note it. There's no reason for that position. There's no rule that a study has to be replicated "or noted in a high-quality review" before it gets noted on Wikipedia. Sure, reviews get greater weight, but when they aren't available, individual studies are citable. Somehow MEDRS even allows for popular press and press releases, as well. II | ( t - c) 14:26, 30 June 2008 (UTC)
Here's why: it's trivially easy for anyone with slight sophistication to mine the primary "reliable" medical literature to advance whatever editorial point they like. I'm thinking of creating an article claiming that HIV cannot possibly be the cause of AIDS, sourced entirely to "reliable", Pubmed-indexed, peer-reviewed publications. It's easy with selective citation, and the only real defense is common sense - an editor's selection and presentation of primary medical studies should never contradict, supersede, or ignore syntheses by reliable third-party sources. MastCell Talk 19:13, 1 July 2008 (UTC)
Colin has admitted, as is obvious, that the "mini-reviews" in primary articles are secondary sources. To be precise in our language, secondary should not be used as a synonym for reviews. Here is what I attempted to add ( diff 1, diff 2). The final text looks as follows: (please read slowly and specifically point towards problems in the addition, and evidence supporting your conclusions if possible)
A secondary source in medicine summarizes one or more primary or secondary sources, usually to give an overview of the current understanding of a medical topic. Review articles and specialist textbooks are examples of secondary sources. A good secondary source from a reputable publisher will be written by an expert in the field and be editorially or peer reviewed. Journalists writing in the popular press, and marketing departments who issue press releases, tend to write poorer secondary source material; however, such material may be appropriate for inclusion in some contexts. (Begin addition) Primary research articles can also be secondary sources of prior literature, and are superior to the popular press in this respect. The best secondary sources are systematic reviews, which look at all available evidence on a particular topic and justify the inclusion and exclusion of evidence. After systematic reviews, preference should be given to the most up-to-date reviews and if necessary primary articles which discuss the largest range of evidence on a particular subject in the most non-technical, analytical manner.
(My addition in red.)
Eubulides reverted this. He stated that he disagreed that primary articles can be secondary sources. However, that primary articles are secondary sources is a fact, just like gravity is a fact. I pointed out that the popular press is a citable secondary source and requested that he explain how the citation and discussion of a primary study is not. He has not answered this question. My edit ranks the sources as follows: 1) systematic reviews, 2) reviews, 3) discussion in primary articles, 4) popular press. We can address the particular issue of citing primary articles separately. Let's discuss the issues with this edit right now. II | ( t - c) 23:35, 30 June 2008 (UTC)
Going down the list (I was going to just interject on each point, but thought you might take offense -- let me know if you want to try that):
With the above discussion in mind, would the following change make sense? In WP:MEDRS #Article type, change from:
to:
Eubulides ( talk) 20:36, 1 July 2008 (UTC)
I don't understand how/why the introduction/discussion sections of primary articles are categorically "much less reliable" than reviews. In general, what I've read lately suggests the exact opposite. Categorical statements to this effect are misleading. Now, we can all point to examples. I can show you 3-4 poor review articles right now, out of the 5-6 that I've read lately. Many reviews, unfortunately, seem to describe primary articles briefly rather than analyzing them. The reality is that reliability is, as it should be, more connected to the author than the type of publication. This can be assessed by looking at how many papers the author has published on the topic. "Review", "primary article" -- these are simply labels. An actual example: Let's say you've got a 2006 review published by one author with 11 (second author 11) papers which discusses, among other things (as reviews frequently do) Quality of Life (QOL) in coeliac patients. It simply describes a few previous studies, noting generally that studies suggest that women suffer more after diagnosis. In 2007, a primary article on Quality of Life appeared whose main author has published over 200 papers (the second 121, the third 303, the fourth 839), many of them on coeliac disease and several on QOL specifically. These are the premier experts of the field. He discusses the issue in detail in his paper, citing more QOL papers than the review does. He differs with (perhaps even refutes) the review, noting that women in western countries "report a lower HRQOL measured by the SF-36 than men". Is he more reliable, as an expert on QOL and coeliac disease? Why would he not be?
In general, you may have a better chance of hearing from the real experts, and hearing their in-depth analysis, in the discussion sections of their papers. The subpar "experts" may be more likely to publish reviews than to do "primary research". And these reviews cover such a wide range that often they just describe them in simple sentences, which offers no value over the article's own abstract. II | ( t - c) 04:30, 2 July 2008 (UTC)
[Outdent]If you cut the "far", then I'd be more inclined to support it. Also, it could be worded better: "Research papers are primary sources, although they are secondary sources in their discussion of prior literature. In this respect they are typically less reliable than reviews because they cover fewer sources (?)." However, that section is not the right section to be discussing the reliability of different article types. Why don't we have a section focused upon reliability of different article types? Also, that section (/Wikipedia:WikiProject_Medicine/Reliable_sources#Article_type) has a factual error: reviews are more likely to contain original research than systematic reviews. Systematic reviews are highly unlikely to contain original research. Reviews can be variable -- some take a bunch of articles and come to a novel conclusion based upon that literature. Systematic reviews simply analyze the rigor and overall conclusions of studies on a narrow topic. This error should be fixed.
In fact, this entire article is rather redundant and scattered, and could use some serious copyediting. I'll do some after we resolve this, and we can deal with it per BRD. II | ( t - c) 06:59, 2 July 2008 (UTC)
Using a PubMed count to establish relative reliability is a phenomenally bad idea. It's like using the number of books an author has published to establish how good a writer they are. David Reardon publishes far more on abortion and mental health than nearly anyone else, but his findings are minoritarian if not discredited in the field. If there is really a head-to-head battle between the findings of two primary articles, then the "referee" should be found in summaries and syntheses of evidence by expert panels, major professional groups, or in review articles published in reputable, high-impact journals. It's not complicated unless we make it so. MastCell Talk 19:30, 2 July 2008 (UTC)
http://jama.ama-assn.org/cgi/content/full/300/1/98 - an excellent set of instructions for people wanting to submit letters to JAMA. I think numerous points in that article are readily applicable to this policy. JFW | T@lk 08:29, 2 July 2008 (UTC)
I suggested to Eubulides that he look at what the academic community says about reviews. I've found some studies. PMID 1834807 (1991) discusses a system used to rate reviews. This could be of significant use for us, as we need to evaluate reviews. It would be interesting to see where this has gone. Related links in PubMed has a vast number of related articles. PMID 9496383 (1997) finds that most reviews are hardly systematic (this is a bad thing). PMID 10610646 (1999 - free access) finds the same thing. PMID 17606172 (2007) focuses on meta-analysis, but finds improvement. PMID 16277721 (2005) says meta-analyses are generally poor. PMC 1602036 (2006) evaluates Cochrane reviews versus industry reviews -- obviously, industry reviews are worse. PMID 9092319 (1997) is a guide for finding systematic reviews. PMC 2379630 (1993) specifically compares OR and reviews. It notes that the answers provided by broad reviews should not be accepted uncritically as valid. Conclusion: Certainly, as my original edit to MEDRS reflects, reviews should generally get priority over primary articles -- but people need to recognize the difference in reviews. Most reviews I've seen are not systematic. Here is an example of an overly broad review. These reviews are less reliable than OR in many cases, since they are often both written by an outsider and give cursory attention to many complex issues. Eubulides has argued that systematic reviews should not get priority; this is directly contradicted by the scientists, and does not make good sense. As I stated earlier, it should go: 1) systematic reviews; 2) good, preferably quasi-systematic reviews -- i.e. a review which states its methods for including literature; 3) OR/broad reviews; 4) popular press/press releases. II | ( t - c) 10:08, 2 July 2008 (UTC)
JFW: you seem to conflate meta-analysis and systematic review. The lead to systematic reviews is not misleading, although it is not sourced. Systematic reviews are considered the top in quality in medical science. You're writing contrary to much evidence presented (above). All of them say that "reviews need to be systematic". Otherwise you're at the mercy of prejudices -- you don't know how much they've just grabbed what they want you to hear. Asking specific questions is a good thing in a review, because you can't cover a ton of questions well -- there are just too many studies. As far as your assumptions of bad faith -- well, they are what they are: uncivil assumptions of bad faith. II | ( t - c) 23:34, 2 July 2008 (UTC)
Eubulides: It backs up my assertion that overly broad, non-analytical, and non-systematic (low-quality) reviews are little better than original research. Reviews should be ranked. There are specific, concrete criteria for evaluating reviews, which that paper lists. When a review fails those criteria, it is not much better than an original research article. I suggest that we incorporate the basic review assessments that that article proposes into MEDRS: 1) Is the question clearly defined? 2) Does the review focus on a specific question? 3) Is the author obviously biased? 4) Are the methods used to gather articles described? 5) Are references scanty? 6) Are the primary studies critically appraised? 7) Are the research design and population described? These are a good start in telling people how to analyze reviews. It's not enough to simply say "use high-quality reviews". Distinguishing between high-quality and low-quality reviews is possible, and should be done. I believe low-quality reviews are typically little better than OR, but obviously there's wide variation. II | ( t - c) 02:08, 3 July 2008 (UTC)
The Cochrane Collaboration has various sets of criteria for evaluating studies. Those criteria may be useful models here. Those criteria do not include impact factor of the journal in which the study appears. Setting criteria is a very active area of work, as the best choice of criteria is an open problem. Try a Google search for "site:cochrane.org criteria" to find a slew of conference abstracts. -- Una Smith ( talk) 15:19, 4 July 2008 (UTC)
I'd like to venture a comment here, even though I'm coming in late on the conversation. it seems to me that the point behind this 3rd-hand, secondary source prescription is that we want to make sure that the views we use reflect a general consensus within a significant portion of a field. reviews as a rule are neither brilliant nor innovative, and it's precisely those lacks that make them useful in WP - they usually reflect a nice run-of-the-mill consensus in the discipline. the 'literature' sections of primary research, by contrast, may (and often do) include recent, innovative, primary research material by other authors doing similar work. there's no question that primary research authors cherry-pick their sources for the purposes of support, criticism, or relevance to their own work. primary researchers are trying to manufacture or influence the current understanding in their field - that's what research is for - and so it's not at all clear to me that primary research will make clear distinctions between the actual current understandings in the field and the author's personal perceptions of what the field should understand.
if Wikipedia has to wait seven years for a result to become fully accepted by the medical community, then Wikipedia should wait seven years. best not to get ahead of scientific consensus... -- Ludwigs2 09:39, 7 July 2008 (UTC)
Is there any way to give a rule of thumb for how recent sources should be? How old is too old? Are sources from the mid or early '90s OK? I guess this would vary based on the subject and how fast it's developing. Any advice about how to gauge this? delldot talk 15:29, 10 July 2008 (UTC)
Here are some rules of thumb for keeping an article up-to-date while maintaining the more-important goal of reliability. These guidelines are appropriate for actively-researched areas with many primary sources and several reviews, and may need to be relaxed in areas where little progress is being made and few reviews are being published.
These are just rules of thumb. There are exceptions:
(end of draft) Eubulides ( talk) 17:08, 10 July 2008 (UTC)
I have some significant concerns about this, perhaps mostly because of the way "rules of thumb" turn into sweeping, iron-clad requirements after a few months, and perhaps because I get the feeling that none of you have any connections to people who write reviews and therefore put too much faith in them.
Sure, if you're only working on articles about congestive heart failure and colon cancer and other common conditions, then the concepts here make a great starting point. However, this isn't going to work at all for very rare diseases, where a well-written case study from twenty years ago may actually be your most reliable source. Consider ODDD. I know: you've never heard of it. But go search for oculodentodigital at pubmed.gov, and limit your search to the last five years. You'll get thirty-five (35) papers. The only "review" on the disease in the last five years (as opposed to the genetics and physiology that underlie the disease) is actually a case study involving three patients. It's dated 2004. I don't expect a better review to appear in 2009, or even by 2014. My expectation is based on the fact that there have apparently never been any proper reviews published for this condition. And what is the first thing the editor reads here? "Do not cite primary sources" -- the only sources that exist for this disease.
This also isn't going to work well for many aspects of uncommon diseases. For example: consider some third-string treatment for an uncommon cancer. You've got a twenty-year old paper that gives you a success rate. It's the only randomized controlled study ever done using the specific treatment in this specific cancer. The recent review cites this paper and summarizes the conclusions in two words: "poor prognosis." According to this, the actual survival rate is suddenly not important, because the study was done before the review, and the review doesn't re-report the actual numbers. Is that what you really want? To put an expiration date on data?
I also think that citing any study that is mentioned favorably in recent reviews should be acceptable. For one thing, we get more detailed articles that way. For another, if the original article is retracted, then we know what we need to change. A review that cites Hwang Woo-Suk favorably is not going to be retracted just because the world later discovered that this Korean scientist fabricated much of his stem cell research.
As ImpIn points out, this scheme works poorly in cases where the recent reviews only cover certain aspects of a disease. I frequently see reviews that are very good in terms of treatment but completely neglect epidemiology. It's hard to find epidemiological information for less developed countries. Sometimes the best we can do is a rather old paper. The fact that an American or European author skips over the prevalence of a disease in Africa or South Asia doesn't mean that this kind of information is unimportant for our worldwide encyclopedia: it means that the review is incomplete. In very common diseases, nearly all of the reviews are deliberately incomplete: you'd write a review on a specific aspect or sub-type of hypertension, because otherwise your review would be the length of a book. I won't say that the authors are necessarily biased because of this -- but reviews cannot be assumed to be complete.
Finally, this advice is completely wrong for history sections, for what ought to be perfectly obvious reasons.
Yes, I know: you only meant this to apply to certain "actively-researched areas with hundreds of primary sources and dozens of reviews". But it's not actually that obvious to those who don't already know what you intended to accomplish. The first thing the editor reads is "Do not cite primary sources." As written, I don't think that this communicates what I think you want to say.
I don't mind stating a general preference for recent reviews, although I still prize editor judgement and a good final product over mindless compliance with rules. I could probably support a system of rules like this if it were clearly stated that this guidance only applies to the sections of an article that deal with current practice in diseases where proper reviews are readily available. I might also add that primary papers aren't bad in themselves, so long as they don't actively contradict all of the recent reviews. Fundamentally, I think that if we're going to publish this, then the caveats and restrictions need to go first, not last, and they need to be stated more strongly than the guidance. For example, "Do not cite primary sources..." should be "Consider citing a recent, comprehensive review in a reputable journal instead of older primary sources." The section might begin with the sentence about this advice only applying to articles on actively-researched areas with hundreds of primary sources and dozens of reviews, although the general principles might be applicable in some less common diseases. WhatamIdoing ( talk) 02:52, 11 July 2008 (UTC)
I'm glad of WhatamIdoing's comments and the changes made. I think we can sometimes concentrate too much on the big diseases that attract controversy and edits from POV pushers. Wrt citing primary sources for studies you wish to comment on, I have found it useful to use the following style:
In effect, the primary source is being used purely to show the study took place and to act as a footnote for the reader should they wish to read the primary material about the study. The secondary source is used to back up the conclusions of the study. My preference is to restrict the explicit mention of studies (the History section is one obvious example) since if the results of the study are now accepted widely, then they can just be stated as facts. Colin° Talk 11:22, 11 July 2008 (UTC)
I agree with citing both in many cases, and I said exactly this above: "It might be best to cite the paper and the review, or something, but it is misleading to cite a review for a statement which is actually just being repeated [from a primary study]." I know that the APA parenthetical citation style encourages you to cite the original source being cited whenever possible, and I imagine a similar practice is at least somewhat encouraged in footnote referencing, because it's much better for the readers to know the original source of an assertion. As my quote shows, I also agree with Steve -- in less controversial articles, citing key primary articles is the appropriate way to go unless the review is doing some critical appraisal, synthesizing several studies -- and often reviews are not doing critical appraisal, but rather just listing studies. II | ( t - c) 23:33, 11 July 2008 (UTC)
WhatamIdoing, "the primary reason that reviewers don't list every single study, with a rationale for including or excluding it, is space constraint. You would spend pages and pages just listing articles for celiac disease." A couple of points: good reviews should be focused for this reason, and it is not that hard to cover all the articles once you've made your question specific, because one can group them like "Several studies found such and such"(1-6). The primary reason that most reviews do not list all articles is 1) poor research, 2) overly broad focus (look for a more specific review), or 3) bias. If you want examples, I've seen plenty. Look at my section above. There are also academic articles which state that reviews with specific questions are preferred, and the Cochrane reviews follow this guideline as well (browse through them). II | ( t - c) 01:20, 13 July 2008 (UTC)
I think the debate over review quality has been done to death and there is very little we, as Wikipedians, can do about improving the literature. We use the best sources we can. Discussions over whether this or that review is biased should be taken to the relevant article's talk page. II, I would take your lecture on what makes a good review, and how we can identify bias, more seriously were it not for this diff proudly displayed on your user page. I particularly enjoyed the "Other studies have found that coconut oil can help in weight loss and poison recovery." statement and sourcing. Colin° Talk 21:01, 13 July 2008 (UTC)
Discussion on the draft itself seems to have died down, so I added it, except that I omitted the detailed example of citing a Cochrane review, which on rereading didn't seem to be worth all that space on the project page. If someone else thinks that example is worth while please feel free to add it of course. Eubulides ( talk) 18:03, 14 July 2008 (UTC)