Thank you to Michael Tidd for this insightful article.
I have something of a mantra in teaching: Do Less, But Better. I try to do less marking, but make it of higher quality; I spend less time on planning pro formas but plan better sequences of lessons; often I ask children to write less in their books, but make each sentence better.
Testing is no exception: less testing, but better. The slight twist here is that less testing, for me, means more tests. It sounds paradoxical at first, but it’s an important distinction. Testing can be long-winded and onerous while achieving little. I can’t be alone in having spent hours marking multiple test papers only to come up with a sub-level that I could have guessed for myself. The old ways of testing were too driven by numbers.
Now I probably use tests more than ever, but the key is in selecting tests purposefully and keeping them as tight as possible. No longer do we run through past papers in full depth. Instead, I choose the right questions to match the key points I’ve been teaching, and use as short a test as possible to achieve the understanding I need. We know, too, from wide-ranging research, that the ‘testing effect’ can actually help children to secure the learning they’ve undertaken as the retrieval of that knowledge or skill can help to cement the understanding.
The point of most tests should be to check what children can and can’t do. I use the Rising Stars Progress Tests for this. Shorter, more regular assessments allow me to pick up on areas that need revisiting, or individuals who need extra support, without demanding hours of marking; often children can be involved in the marking themselves, allowing them to make some assessment of their own strengths and needs. They can be used wholly formatively.
But there is a reality about testing that we cannot avoid. One of the uses of a summative test is to be able to compare large groups, identify gaps, ensure that disadvantaged pupils are not left behind, etc. It makes sense, as we approach the end of the academic year, to look at each cohort and identify how they have progressed more generally. For this we will use the Rising Stars Optional Tests.
Inevitably, any test can only be a snapshot of attainment, but it’s a useful one – particularly when looking across a whole school. It’s useful for school leaders to be able to have a straightforward comparison of attainment across cohorts. It also allows us to see an indication of progress individuals have made. The story that accompanies each child’s result is key, but the results of a common test can provide a useful starting point for those discussions.
Using the Optional Tests at the end of the year allows us to spot patterns: are disadvantaged pupils achieving as well as their peers? Are boys and girls progressing equally? Have any pupils made notably more or less progress than might be expected over the course of the year? Are any areas of our curriculum not yet being secured?
The ethos of Do Less, But Better fits testing perfectly. Our children are not spending as long in test conditions as they might have done in the past; but when we use tests, it is with purpose and effect – and that has to be a better way.
Tags: formative assessment, key stage 1, key stage 2, summative assessment