Teachers could be compared to a piece of rope. We’re hardy and resilient, but stretched thin over long hours. We’re a necessary piece of equipment that supports a lot of people, but we’re rarely given a second thought – unless we fail. But consider the plight of the rope chosen for Tug of War. Two groups of people come along one day, pick up either end of the rope, and begin to pull with all their might. The poor rope in the centre has no choice but to bear all of their tension. For the tensions inherent in teaching, see this text; for an analysis of how easy it is to break even a strong rope, see this article.
The tension I want to focus on in this post is between making assessment tools simple (as argued by Davies, 2011) and making them objective (as demanded by many students and stakeholders). To clarify the terms: a simple assessment tool is brief, clear, and easy to read and understand. An objective assessment tool gives the same result no matter who uses it – the perspective of the assessor does not impact the result of the assessment.
Most people involved in education fall somewhere in the middle of this rope’s span. The simplest assessment tool is a blank piece of paper – the assessor fills in whatever they want. It’s entirely subjective, completely simple, and not very useful. The most objective tool removes the need for the assessor – it could be filled out by a rock, and you’d get the same results. But nobody wants an assessor who has no opinions, so we need a bit more subjectivity.
You might be asking yourself, have I crossed my ropes? Aren’t there two tensions here, one between simplicity and complexity, the other between objectivity and subjectivity? Although the tension could be set up as such, it’s more meaningful to braid the ropes into one. Allow me to demonstrate:
You start with a blank piece of paper. Beautifully simple, but terribly subjective – the assessor can write whatever they want! So you decide to give the assessor some criteria. They now have categories they need to assess within. Objectivity increases, but simplicity decreases – you have words on your page now.
Next, you decide you need some way to convert this into grades, otherwise the board will be upset (an entirely different tension). So you throw down a scale (say, one to four) for each category. But you’ll need criteria to distinguish between the levels of the scale, so you create criteria for each level of each category. Your assessment tool is starting to be more objective, but simplicity is rapidly vanishing. You have a full-scale rubric on your hands now.
Thus the tension, as I’ve established it. But let’s take a step back – let’s go back to the blank page with a few categories written on it. If we have a strong assessor, who knows what they’re talking about, would descriptive feedback under those headings be enough? Would it be the most useful formative feedback to receive? We only started losing our simplicity and devaluing our assessor when we brought in the grades. Maybe we can stick to a page with some categories for all assessment. Maybe evaluation can be determined by the students after they’ve received lots of feedback from their assessment – they can decide what’s important and how to assign grades based on their performance. Maybe grades can become authentic.