Using Assessment Rubrics in the Classroom

I have discussed this on the Non Contact Time podcast - listen below (starting at approx 35 mins)





History of the rubric

Like much in education, “rubric” is an over-used and mis-used term. It can mean a rough guide to the standard required of a piece of work; it can rely on unqualified terms such as “required standard”; it can be frustratingly vague (“competent delivery”) or, equally, hyper-specific in its detail. If it is so ill-defined, what actually is a rubric, and how can we use it effectively in the classroom?


Popham (1997) proposed that a rubric should consist of:

(1) evaluative criteria, 

(2) quality definitions for those criteria at particular levels, and 

(3) a scoring strategy. 


Jonsson and Svingby (2007) suggest that there are four types of rubric: analytic, holistic, generic and task-specific. According to Reddy and Andrade (2010), you can further categorise these into those which are teacher-created and those which are co-created with students. 

Holistic: takes everything into account and then arrives at one overall grade

Analytic: all aspects are marked individually and then combined and/or averaged to reach one final grade (Popham, 1997)

Generic: an overview of the whole course, perhaps

Task-specific: criteria which are individual to each task


Jonsson and Svingby also summarise their conclusions thus:

“(1) the reliable scoring of performance assessments can be enhanced by the use of rubrics, especially if they are analytic, topic-specific, and complemented with exemplars and/or rater training; 

(2) rubrics do not facilitate valid judgment of performance assessments per se. However, valid assessment could be facilitated by using a more comprehensive framework of validity when validating the rubric; 

(3) rubrics seem to have the potential of promoting learning and/or improve instruction. The main reason for this potential lies in the fact that rubrics make expectations and criteria explicit, which also facilitates feedback and self-assessment.” (Jonsson and Svingby, 2007)


Using rubrics

Who are the rubrics for? Torrance (2007) discusses secrecy versus sharing with students. If the criteria are not shared with students then there can be no formative element, and their application becomes entirely summative. If teachers share the information with pupils in advance of the assessment then it forms part of the learning process and becomes both formative and, eventually, summative. 


So, once you’ve decided to use rubrics in your department, how do you ensure accuracy of application? Torrance (2007) found that quality descriptors which require a qualitative judgement can result in variance in marking standards between assessors. Tierney and Simon (2004) suggest that exemplar work is therefore useful if the rubric isn’t very specific, and this could even be a marked example. This is very similar to what we do with GCSE and A level work, especially in subjects such as English, Drama and Languages where extended writing is required. 


As with any new initiative in a department, it has to be manageable and easy to implement. If the rubric is too large and complicated then teachers won’t use it. It has to be succinct enough to be useful on a regular basis but detailed enough to give accurate and useful information. I favour a combined approach of both generic and task-specific rubrics, with an overarching generic rubric which covers 11 key musical skills across EYFS - Y8, and task-specific rubrics for the performance skills assessed in each unit. Each of those skills on the generic rubric can then be mapped to specific teaching & learning activities, which in turn generate assessment activities, each of which has its own task-specific rubric per unit. 

Basic layout of a rubric

I would urge some caution in the use of rubrics though. Torrance (2007) describes the journey from “assessment of learning” through “assessment for learning” to “assessment as learning” and warns that we can become too focussed on ‘teaching to the test’ rather than teaching the content. The rubric itself should not replace high quality teaching and learning, but merely help guide the consistent assessment of a well designed task. 


Examples of rubrics

Generic (EYFS-Y8)

Click here to open the file


Incidentally, the above “generic rubric” doesn’t actually become a rubric until you apply a scoring system to it. In this case, it is measured out of 3: “working towards” (1 mark), “working at” (2 marks), and “working beyond” (3 marks).

A year 8 student who was consistently working at or above the expected standard would therefore score between 22 and 33 on this rubric. Once all the criteria have been assessed, all students would score between 11 and 33. A score of below 11 would indicate that not all of the criteria have been assessed. 
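As a quick illustration of that arithmetic, the generic rubric scoring can be sketched in a few lines of Python. The 11 strands and the three quality levels come from the post; the function and variable names are my own invention, not part of any actual system.

```python
# Illustrative sketch of the generic rubric scoring described above.
# The 11 "strands" and three quality levels are from the post;
# the names here are hypothetical.

LEVELS = {"working towards": 1, "working at": 2, "working beyond": 3}

def generic_rubric_total(strand_levels):
    """Sum the marks for all 11 strands; a complete score falls in 11-33."""
    if len(strand_levels) != 11:
        raise ValueError("all 11 strands must be assessed")
    return sum(LEVELS[level] for level in strand_levels)

# A Year 8 student consistently working at or above the expected standard:
marks = ["working at"] * 6 + ["working beyond"] * 5
print(generic_rubric_total(marks))  # 6*2 + 5*3 = 27, within the 22-33 band
```

The minimum complete score (all “working towards”) is 11 and the maximum (all “working beyond”) is 33, which matches the ranges quoted above.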


Each generic criterion, or “strand” as I call them, is then assessed through one or more assessment tasks throughout the year. In Music, we can assess the performance skills across a number of different instruments, so the same strand is assessed multiple times and an average mark can be taken at the end of the year. That average mark then informs the final overall assessment of the strand: is the student “working towards” (1 mark), “working at” (2 marks), or “working beyond” (3 marks) the standard we expect for that year group? These assessment tasks are scored using a task-specific rubric, as demonstrated below. 
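The end-of-year averaging step could be sketched like this. Note that the cut-offs used to map an average back to “working towards/at/beyond” are an assumption of mine, not a rule stated in the post.

```python
# Sketch of averaging repeated assessments of one strand across the year.
# Each task is scored with a task-specific rubric; mapping the average
# back to a level via the cut-offs below is my assumption.

def strand_average(task_marks):
    """Average the marks one strand received across its assessment tasks."""
    return sum(task_marks) / len(task_marks)

def final_judgement(average, boundaries=(1.5, 2.5)):
    """Map an average mark on the 1-3 scale to a final level (cut-offs assumed)."""
    if average < boundaries[0]:
        return "working towards"
    if average < boundaries[1]:
        return "working at"
    return "working beyond"

# e.g. the same performance strand assessed on three different instruments:
avg = strand_average([2, 3, 2])   # roughly 2.33
print(final_judgement(avg))       # "working at"
```

A department might reasonably choose different boundaries (for example, requiring an average of 2.0 or better for “working at”); the point is only that repeated task-specific scores roll up into one generic-strand judgement.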


Generic (year 6 extract)


Task specific based on the above generic extract (year 6 piano performance of Vivaldi’s “Spring”)


Developing rubrics

Generic rubrics are best developed at curriculum planning level when we are deciding what we are teaching and therefore what we expect the students to be able to demonstrate at the end.

The task-specific rubrics are developed alongside the learning activities, not the assessment activities. The rubric should help shape the assessment activity so that it reflects what has actually been taught. There’s no point trying to measure something you haven’t taught: your students won’t do well, and you won’t have learned anything about the effectiveness of the teaching & learning process, only that you didn’t teach everything that was assessed. 


Uses of rubrics in the classroom

Students and teachers will use the rubrics in different ways but that’s what makes them so useful - once you have made them they can be applied quite flexibly. 


I did some basic investigation at a previous school, where students found it helpful to have the rubric shared with them from lesson 1 (as opposed to further on in the unit, towards the assessment task) so they knew what they were working towards and what was expected of them. As a result, I have now completely replaced learning objectives (LOs) with rubrics, as I find they provide more detail to the students about what is expected of them.


A question that is perhaps worth investigating further: are rubrics more helpful in practical subjects? I certainly use the task-specific ones more for the practical side of lessons than I do for listening & theory. Maybe it’s because there’s so much musical content that differs between extracts that a rubric becomes less useful for listening than it does for practical or theory work. I find that ticking “describes melody, harmony, rhythm etc. with a high degree of accuracy” seems more workable for me at the moment, although this is something I am keeping an open mind about as I develop my use of rubrics further. 


Collecting and logging data

The final aspect of an assessment rubric, according to Popham (1997), is a scoring strategy. Personally, I don’t go into too much detail with mine: I simply assign each of the “steps” (as I call them - I’m trying to avoid “level”, as the terminology could then become confused with the old NC Levels) its relevant number value, and so score each performance or composition task out of 4. If you were applying a rubric to an extended piece of writing you might wish to use grade “bands” instead, similar to what is used at GCSE, where each piece of work that falls into a given “step”/“band”/“level” can still fall somewhere on a range of marks, providing the opportunity for much more nuanced assessment. 
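The “bands” idea can be made concrete with a toy example. The band names and mark ranges below are invented for illustration; they are not taken from any exam board’s scheme.

```python
# Hypothetical mark bands for an extended-writing rubric, in the GCSE
# style described above: each band spans a range of marks, so two pieces
# of work in the same band can still earn different totals.

BANDS = {
    "band 1": range(1, 5),    # 1-4 marks
    "band 2": range(5, 9),    # 5-8 marks
    "band 3": range(9, 13),   # 9-12 marks
}

def band_for_mark(mark):
    """Return the band a given mark falls into."""
    for name, marks in BANDS.items():
        if mark in marks:
            return name
    raise ValueError(f"mark {mark} is outside all bands")

print(band_for_mark(7))   # "band 2"
```

This is the sense in which a band-based rubric is more nuanced than a single mark per step: the band gives the quality judgement, while the mark within it records how securely the work sits there.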

For logging data I use iDoceo on iPad - I am about as paper free as a music teacher can be.

I like that I am able to highlight the rubric and share to google classroom (as a pdf) on the go. 

You can also link the task-specific rubric to the generic rubric, enabling the teacher to track on both a macro and a micro level (obviously, you can also do this on paper, but I find that it’s more of a faff). Personally, I would share the task-specific, micro-level rubric with students, but I wouldn’t share the year-overview generic one, as it is too vague to be useful to them.


I have previously printed and stuck rubrics into student books, then attacked them with a highlighter when doing the rounds of the classroom. I would then make a note on the mark book page of my planner using the first initial of the quality level for each evaluative statement - basically exactly as you would if you were entering a grade after marking an essay. I now do this using iDoceo.


I am a big fan of using rubrics as both formative and summative assessment, and I feel that dialogue with students is very important when using rubrics. If the rubric, and a related discussion, isn’t able to tell the students anything about the work they have done so far and the work they still have to do then, in my opinion, why bother expending the energy to create it in the first place?



References

Dawson, P. 2017. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment & Evaluation in Higher Education 42 (3): 347–360.


Jonsson, A., and G. Svingby. 2007. “The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences.” Educational Research Review 2 (2): 130–144.


Popham, W. J. 1997. “What’s Wrong—and What’s Right—with Rubrics.” Educational Leadership 55 (2): 72–75.


Reddy, Y. M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education 35 (4): 435–448.


Tierney, R., and M. Simon. 2004. “What’s Still Wrong with Rubrics: Focusing on the Consistency of Performance Criteria across Scale Levels.” Practical Assessment, Research & Evaluation 9 (2): 1–10.


Torrance, H. 2007. “Assessment as Learning? How the Use of Explicit Learning Objectives, Assessment Criteria and Feedback in Post-secondary Education and Training Can Come to Dominate Learning.” Assessment in Education: Principles, Policy & Practice 14 (3): 281–294.
