Holistic grading or holistic scoring, in standards-based education, is an approach to scoring essays using a simple grading structure that bases a grade on a paper's overall quality. [1] This type of grading, also described as nonreductionist grading, [2] contrasts with analytic grading, [3] which takes more factors into account when assigning a grade. Holistic grading can also be used to assess classroom-based work. Rather than counting errors, a paper is judged holistically and often compared to an anchor paper to evaluate whether it meets a writing standard. [4] It differs from other methods of scoring written discourse in two basic ways: it treats the composition as a whole, without assigning separate values to different parts of the writing, and it uses two or more raters, with the final score derived from their independent scores. Holistic scoring has gone by other names: "non-analytic," "overall quality," "general merit," "general impression," and "rapid impression." Although the value and validation of the system are a matter of debate, holistic scoring of writing remains widely used.
In holistic scoring, two or more raters independently assign a single score to a writing sample. Depending on the evaluative situation, the form of the score will vary (e.g., "78," "passing," "deserves credit," "worthy of A-level," "very well qualified"), but each rating must be unitary. If raters are asked to consider or score separate aspects of the writing (e.g., organization, style, reasoning, support), their final holistic score is not mathematically derived from that initial consideration or those scores. Raters are first calibrated as a group so that two or more of them can independently assign a final score to a writing sample within a pre-determined degree of reliability. The final score lies along a pre-set scale of values, which scorers try to apply consistently, and is derived from two or more independent ratings. Holistic scoring is often contrasted with analytic scoring. [5] [6] [7]
The composing of extended pieces of prose has been required of workers in many salaried walks of life, from science, business, and industry to law, religion, and politics. [8] Competence in writing extended prose has also formed part of qualifying or certification tests for teachers, public servants, and military officers. [9] [10] Consequently, the teaching of writing is part of formal education in school and, in the US, in college. How can that competence in composing extended prose best be evaluated? Isolated parts of it can be tested with "objective", short-answer items: correct spelling and punctuation, for instance. Such items are scored with high degrees of reliability. But how well do such items evaluate potential or accomplishment in writing coherent and meaningful extended passages? Testing candidates by having them write pieces of extended discourse seems a more valid evaluation method. That method, however, raises the issue of reliability. How reliably can the worth of a piece of writing be judged among readers and across assessment episodes? Teachers and other judges trust their knowledge of the subject and their understanding of good and bad writing, yet this trust in "connoisseurship" [11] has long been questioned. Equally knowledgeable connoisseurs have been shown to give widely different marks to the same essays. [12] [13] [14] [15] Holistic scoring, with its attention to both reliability and validity, offers itself as a better method of judging writing competence. With attention to fairness, it can also focus on consequences of score use. [16]
While analytic grading involves criterion-by-criterion judgments, holistic grading appraises student works as integrated entities. In holistic grading, the learner's performance is approached as one and cannot be reduced or divided into several component performances. [17] Here, teachers are required to consider specific aspects of the student's answer as well as the quality of the whole. [18]
Holistic grading operates by distinguishing a satisfactory performance from one that is merely adequate or one that is outstanding. [2]
Although a wide variety of procedures for holistic scoring have been tried, four forms have established distinct traditions. [19]
Pooled-rater scoring typically uses three to five independent readers for each sample of writing. Although the scorers work from a common rating scale, and may have a set of sample papers illustrating that scale ("anchor papers" [20]), they usually have had minimal training together. Their scores are simply summed or averaged for the sample's final score. In Britain, pooled-rater holistic scoring was first experimentally tested in 1934, employing ten teacher-raters per sample. [21] It was first put into practice with 11+ examination scripts in Devon in 1939, using four teachers per essay. [22] In the United States its rater reliability was validated from 1961 to 1966 by the Educational Testing Service; [23] and it was used, sporadically, in the Educational Testing Service's English Composition Test from 1963 to 1992, employing from three to five raters per essay. [24] A nearly synonymous term for "pooled-rater score" is "distributive evaluation". [25]
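The arithmetic of pooling is straightforward. As a minimal sketch (the function name and score scale are illustrative, not drawn from any named testing program), the final score is simply the sum or average of the independent ratings:

```python
def pooled_score(ratings, method="average"):
    """Combine independent holistic ratings into one final score.

    ratings: list of numeric scores, one per independent reader.
    method: "average" or "sum", the two pooling rules described above.
    """
    if method == "sum":
        return sum(ratings)
    return sum(ratings) / len(ratings)

# Four teacher-raters score one essay on a 1-6 scale:
final = pooled_score([4, 5, 4, 4])         # averaged: 4.25
total = pooled_score([4, 5, 4, 4], "sum")  # summed: 17
```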
Trait-informed scoring trains raters to score to a scoring guide (also called a "rubric" [26] or "checklist" [27]): a short set of writing criteria, each scaled in grid format to the same number of accomplishment levels. For instance, the scoring guide used in a 1969 City University of New York study of student writing had five criteria (ideas, organization, sentence structure, wording, and punctuation/mechanics/spelling) and three levels (superior, average, unacceptable). [28] The rationale for scoring guides is that they force scorers to attend to a spread of writing accomplishments and not give undue influence to one or two (the "halo effect"). Trait-informed scoring comes close to analytic scoring methods, which have raters score each trait independently of the other traits and then add up the scores for a final mark, as in the Diederich scale. [29] Trait-informed holistic scoring, however, remains holistic at heart and asks raters only to take all the traits into some account before deciding on a single final score.
Adjusted-rater scoring assumes that some scorers are more accurate than others. Each paper is read independently by two raters, and if their scores disagree beyond a certain threshold, usually by more than one point on the rating scale, the paper is read by a third, more experienced reader. Scorers who cause too many third readings are sometimes re-trained during the scoring session, sometimes dropped from the reading corps. [30] [31] Adjusted-rater holistic scoring may have first been applied by the Board of Examiners for The College of the University of Chicago in 1943. [32] Today, large-scale commercial testing services sometimes use adjusted-rater scoring in which one rater for an essay is a trained human and the other a computer programmed for automatic essay scoring, for instance in GRE testing. [33] [34]
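The adjudication rule can be sketched as follows (a hypothetical illustration only; real scoring sessions vary in scale, threshold, and how the third reading is weighed):

```python
def resolve_scores(first, second, third_reader, max_gap=1):
    """Resolve two independent holistic ratings of one paper.

    If the two scores differ by more than `max_gap` points on the
    rating scale, the paper goes to `third_reader`, a more experienced
    scorer (modeled here as a callable returning the deciding score);
    otherwise the two agreeing scores are averaged.
    """
    if abs(first - second) > max_gap:
        return third_reader()
    return (first + second) / 2

resolve_scores(4, 5, lambda: 0)  # adjacent scores, no third read: 4.5
resolve_scores(2, 5, lambda: 4)  # 3-point gap, third reader decides: 4
```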
Single-rater monitored scoring trains raters as a group and may provide them with a detailed marking scheme. Each writing sample is scored, however, by only one rater unless, through periodic checking by a monitor, its score is deemed outside the range of acceptability, in which case it is re-rated, usually by the supervisor. This method, called "single marking" or "sampling", has long been standard in school examinations in Great Britain, even though it has been shown to be less valid than double marking or multiple marking. [35] [36] In the United States, for the Writing Section of the TOEFL iBT, [37] the Educational Testing Service now uses a combination of automated scoring and a certified human rater.
In Great Britain, formal pooled-rater holistic scoring was proposed as early as 1924 [38] and formally tested in 1934–1935. [39] It was first applied in 1939 by Chief Examiner R. K. Robertson to 11+ scripts in the Local Examination Authority of Devon, England, and continued there for ten years. [40] Although other LEAs in Great Britain tried the system during the 1950s and 1960s, and its reliability and validity were much studied by British researchers, it failed to take hold. Multiple marking of school scripts, usually written to show competence in subject areas, largely gave way to single-rater monitored scoring with analytical marking schemes. [41] [42]
In the US, the first applied holistic scoring of writing samples was administered by Paul B. Diederich at The College of the University of Chicago as a comprehensive examination for credit in the first-year writing course. The method was adjusted-rater scoring, with teachers of the course as scorers and members of the Board of Examiners as adjusters. [43] [44] Around 1956 the Advanced Placement examination of the College Board began an adjusted-rater holistic system to score essays for advanced English credit. Raters were high-school teachers, who brought the rating system back to their schools. [45] One teacher was Albert Lavin, who instituted similar holistic scoring at Sir Francis Drake High School in Marin County, California, 1966–1972, at grades 9, 10, 11, and 12, to show progress in school writing over those years. [46] In 1973 teachers in the California State University and Colleges system used the Advanced Placement adjusted-rater system to score essays written by matriculating students for advanced English composition credit. [47] Pooled-rater holistic scoring was tested as early as 1950 by the Educational Testing Service (using the term "wholistic"). [48] It was first applied in the College Board's 1963 English Composition Test. [49] In higher education, the Georgia Regents' Testing Program, a rising-junior test for language skills, used it as early as 1972. [50]
In the US, a rapid spread of holistic scoring took place from around 1975 to 1990, fueled in part by the educational accountability movement. In 1980, assessment of school writing was being conducted in at least 24 states, the large majority by writing samples rated holistically. [51] In post-secondary education, more and more colleges and universities were using holistic scoring for advanced credit, placement into first-year writing courses, exit from writing courses, and qualification for junior status and for the undergraduate degree. Writing teachers were also instructing their students in holistic scoring so they could judge one another's writing, a pedagogy taught in National Writing Projects. [52]
Beginning in the last two decades of the 20th century, the use of holistic scoring declined somewhat. Other, perhaps more valid, means of rating a student's writing competence were becoming popular, such as portfolios. Colleges were turning more and more to testing agencies, such as ACT and ETS, to score writing samples for them, and by the first decade of the 21st century those agencies were doing some of that by automatic essay scoring. But holistic scoring of essays by humans is still applied in large-scale commercial tests such as the GED, TOEFL iBT, and GRE General Test. It is also used for placement or academic progression in some institutions of higher education, for instance at Washington State University. [53] For admission and placement into writing courses, however, most colleges now rely on the analytical scoring of writing skills in tests such as ACT, SAT, CLEP, and International Baccalaureate.
Holistic scoring is often validated by its outcomes. Consistency among rater scores, or "rater reliability," has been computed by at least eight different formulas, among them percentage of agreement, Pearson's r correlation coefficient, the Spearman-Brown formula, Cronbach's alpha, and quadratic weighted kappa. [54] [55] Cost of scoring can be calculated by measuring the average time raters spend on scoring a writing sample, the percentage of samples requiring a third reading, or the expenditure on stipends for raters, salaries of session leaders, refreshments for raters, machine copying, room rental, etc. Occasionally, especially with high-stakes uses such as standardized testing for college admission, efforts are made to estimate the concurrent validity of the scores. For instance, in an early study of the General Education Development test (GED), the American Council on Education compared an experimental holistic essay score with the existing multiple-choice score and found that the two scores measured somewhat different sets of skills. [56] More often, predictive validity is measured by comparing a school student's holistic score with later achievement in college courses, usually first-semester GPA, end-of-course grade in a first-year writing course, or teacher opinion of the student's writing ability. These correlations are usually low to moderate. [57]
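Two of the simpler reliability formulas can be sketched in a few lines. This is a minimal illustration using only the standard library; a real assessment program would also apply the other formulas named above, such as quadratic weighted kappa:

```python
from statistics import mean

def percent_agreement(a, b, tolerance=0):
    """Share of papers on which two raters agree within `tolerance` points."""
    hits = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
    return hits / len(a)

def pearson_r(a, b):
    """Pearson's r correlation between two raters' score lists."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

# Two raters score six essays on a 1-6 scale:
r1 = [4, 3, 5, 2, 6, 4]
r2 = [4, 4, 5, 2, 5, 3]
percent_agreement(r1, r2)                # exact agreement: 0.5
percent_agreement(r1, r2, tolerance=1)   # adjacent agreement: 1.0
pearson_r(r1, r2)                        # correlation, roughly 0.85
```

Exact versus adjacent agreement matters in practice: adjusted-rater systems typically treat scores within one point of each other as agreeing, so adjacent agreement is the rate at which no third reading is needed.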
Holistic scoring of writing attracted adverse criticism almost from the beginning. In the 1970s and 1980s and beyond, the criticism grew. [58] [59] [60] [61]
Many institutions use holistic grading when evaluating student writing as part of a graduation requirement. [3] Some examples include: