James Pembroke is back with another blog post, and this time he's talking all about the planned changes to accountability and why they mean that standardisation is more important than ever!
For as long as most of us can remember, the progress of pupils in primary schools has been measured from Key Stage 1. Prior to 2016 we had a mixed economy: a levels of progress measure – where making two levels of progress across Key Stage 2 was defined as ‘expected’ – and a value added (VA) measure, in which each pupil’s score at Key Stage 2 was compared to the national average score of pupils with similar Key Stage 1 prior attainment. This dual approach to measuring progress was confusing because the two measures did not relate to one another. In fact, they were often at odds, and it was entirely feasible for a school to have all pupils make the expected two levels of progress and yet end up with a VA score that was significantly below average. Something had to give.
In 2014, the Department for Education announced its intention to remove national curriculum levels. These were removed for sound reasons – they were best-fit, they encouraged pace through the curriculum at the expense of depth and consolidation, they labelled children, and they told us little or nothing about what pupils could or couldn’t do – but schools understandably felt that the rug had been pulled out from under their feet. And with the removal of levels went the levels of progress measure, which, we were told, would be replaced by a new progress measure. This was a little disingenuous because all that really happened was that value added was retained and rebranded as ‘progress’. The methodology is pretty much identical. Meet the new boss.
And so, we still have a value-added measure to assess the progress of pupils between Key Stage 1 and Key Stage 2. Pupils are currently placed into one of twenty-four prior attainment groups based on their Key Stage 1 average point score (APS). Those with the lowest KS1 scores (i.e. those with p-scales) are in the lowest groups; those with the highest scores (i.e. those with 2a and level 3) are in the highest groups. Each pupil’s Key Stage 2 score is then compared to the average Key Stage 2 score for their group, and the difference is the pupil’s progress score. The school’s progress score is the average of these differences: it shows by how much, on average, the school’s pupils exceed or fall short of the average attainment of similar pupils nationally.
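To make the arithmetic concrete, here is a minimal sketch in Python. The benchmark figures, group labels and pupil scores are entirely invented for illustration; the real prior attainment group benchmarks are only published by the DfE each year once KS2 results are known.

```python
# A minimal sketch of the KS1-to-KS2 value-added calculation described above.
# All figures are invented for illustration; real prior attainment group
# benchmarks are published by the DfE each year, after KS2 results are known.

# Hypothetical national average KS2 scaled scores for three prior attainment groups
national_benchmarks = {
    "group_10": 98.5,   # pupils with lower KS1 average point scores
    "group_15": 103.2,
    "group_20": 108.7,  # pupils with higher KS1 average point scores
}

# One school's pupils: (prior attainment group, KS2 scaled score)
pupils = [
    ("group_10", 100),
    ("group_15", 101),
    ("group_20", 110),
]

# A pupil's progress score is their KS2 score minus the national average
# score for pupils in the same prior attainment group.
pupil_scores = [round(ks2 - national_benchmarks[group], 1) for group, ks2 in pupils]

# The school's progress score is the mean of its pupils' scores.
school_score = sum(pupil_scores) / len(pupil_scores)

print(pupil_scores)            # [1.5, -2.2, 1.3]
print(round(school_score, 2))  # 0.2 -> slightly above average on these invented figures
```

In practice the published measures are calculated separately for reading, writing and maths and are reported with confidence intervals, but the core idea is just this comparison of each pupil against the national average for their prior attainment group.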
Most senior leaders are now familiar with this method but there are issues, and the DfE is looking to address these. Change is on the horizon.
One of the issues is that Key Stage 1 data is not very reliable. Sublevels – on which the prior attainment groups are based – are broad, best-fit and poorly defined, so there is quite a high level of subjectivity in the system. The next problem is that we are running out of cohorts with Key Stage 1 levels, and we are still in the dark as to how progress will be measured once levels disappear from the system in 2020. Another major issue is that progress cannot be predicted, because pupils are compared to the average scores of similar pupils nationally in the same year. This means we never know the benchmarks until pupils leave the school. To ease schools’ anxiety on this front, Ofsted has stated that it ‘does not expect any prediction by schools of a progress score, as they are aware that this information will not be possible to produce due to the way progress measures at both KS2 and KS4 are calculated.’ This is very welcome, but it does not stop schools from wanting some indication of the progress their pupils are making.
The final issue is that progress is still only measured for four of the seven years of statutory primary education, which has resulted in all sorts of bodged methods for measuring progress across Key Stage 1. These usually involve crude assumptions about Key Stage 1 attainment made on the basis of outcomes in specific early learning goals at the foundation stage – something that looks remarkably similar to the old ‘expected’ progress measure, and which is currently a feature of Ofsted’s inspection data summary report.
To address this issue, the DfE is planning to introduce a standardised reception baseline in autumn 2020. This will assess pupils’ ability on entry to reception classes that year and will form the basis of future progress measures. It is hoped that this will provide us with a simpler, more reliable and more accurate measure of a cohort’s progress across the seven years of the primary phase than the current disjointed system. But the issue is that it’s a long time span with no interim national assessment points at which to evaluate standards, meaning schools will be in the dark, and increasingly reliant on non-standardised tracking as an indicator of attainment and progress.
This is why standardised assessment in English and Maths is an attractive option for many schools. It allows teachers to benchmark pupils' attainment against a large, representative national sample, and monitor their progress year-on-year, or even term-on-term. Rather than expecting scores to go up at each assessment point, we are simply looking to see if pupils are maintaining their position nationally and are therefore keeping pace and making good progress. In addition to monitoring standards – and perhaps more importantly – such assessments provide teachers with a rich source of item level analysis, which can reveal those critical gaps in learning that need to be addressed, both at individual and cohort level. And finally, standardised assessments will help MATs monitor standards of schools across a trust and target resources accordingly.
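As a rough illustration of the ‘keeping pace’ point, the sketch below uses invented standardised scores and an arbitrary tolerance (not any published threshold) to flag pupils whose position has shifted between two assessment points. Because standardised scores are anchored to a national mean of around 100, a broadly stable score means the pupil has held their position relative to the national sample.

```python
# An illustrative sketch of the "keeping pace" idea for standardised scores.
# Standardised scores are typically scaled to a national mean of 100, so a
# pupil whose score stays broadly the same has held their position nationally.
# The +/- 3 point tolerance is arbitrary, not a published threshold.

TOLERANCE = 3

# Invented standardised scores from two assessment points
pupils = {
    "Pupil A": {"autumn": 96,  "summer": 97},
    "Pupil B": {"autumn": 108, "summer": 101},
    "Pupil C": {"autumn": 100, "summer": 112},
}

for name, scores in pupils.items():
    change = scores["summer"] - scores["autumn"]
    if change < -TOLERANCE:
        status = "falling behind - worth investigating"
    elif change > TOLERANCE:
        status = "gaining ground"
    else:
        status = "keeping pace"
    print(f"{name}: {scores['autumn']} -> {scores['summer']} ({status})")
```

A real tracking system would take account of the standard error of each test rather than a fixed cut-off, but the principle is the same: we are looking for pupils holding or improving their national position, not for raw scores to rise.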
Standardised assessments are already an important tool for many schools but, in future, with statutory assessment points spaced seven years apart, they are likely to become even more vital.