White paper 2020 FAQs
Who has analysed the anonymous test data?
SchoolDash Ltd were appointed to aggregate and analyse the test data. This work was conducted by Timo Hannay, the founder of SchoolDash.
What do the findings of this study mean for my school and cohort?
We have published this study to provide a view of national trends. This enables schools that have used the Summer Papers to benchmark their own results. Schools that haven’t used the Summer Papers may find it interesting to compare this analysis with their own pupil or teacher assessments on reopening this September. It may also be useful in indicating subject areas and pupil groups that are most likely to benefit from specific attention this autumn and beyond.
Why have Summer Papers been used by schools at the start of the autumn term and how does this affect the test results?
As all schools in England were closed from March to July 2020, many schools were looking to understand how much of the previous year's curriculum children had learned and retained. The results of the Summer Papers for the previous year provided teachers with an indication of overall attainment as well as areas of learning to focus on. These papers are usually taken at the end of the summer term to help support a teacher's judgement as to whether their pupils are meeting age-related expectations. Schools do not usually take the papers as a baseline in the autumn, so we have no national data upon which to compare. This means any change in attainment when comparing against summer 2019 results could be a combination of both school closures and summer learning loss.
Are the cohorts who took the test in 2019 and 2020 of a similar ability?
Yes, by comparing the test results of the cohorts who took the Autumn Papers and Spring Papers in 2018/19 and 2019/20, we can see the mean and standard deviation of the standardised scores were comparable. This leads us to believe that had schools not been closed during 2020, the average Summer Paper results in 2020 (if taken at the same time) would have been similar to 2019.
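For schools that want to run a similar comparability check on their own data, the sketch below shows one way to summarise the mean and standard deviation of standardised scores by cohort. It assumes per-pupil scores are held in a pandas DataFrame; the column names and example values are illustrative only and are not taken from the study.

```python
# Minimal sketch of a cohort-comparability check, assuming per-pupil
# standardised scores in a DataFrame with hypothetical columns
# 'academic_year', 'paper' and 'standardised_score'.
import pandas as pd

def summarise_cohorts(results: pd.DataFrame) -> pd.DataFrame:
    """Mean, standard deviation and count of standardised scores
    for each academic year and paper."""
    return (
        results
        .groupby(["academic_year", "paper"])["standardised_score"]
        .agg(["mean", "std", "count"])
        .round(1)
    )

# Made-up example data: two pupils per paper per year.
results = pd.DataFrame({
    "academic_year": ["2018/19"] * 4 + ["2019/20"] * 4,
    "paper": ["Autumn", "Autumn", "Spring", "Spring"] * 2,
    "standardised_score": [98.0, 104.0, 101.0, 95.0, 99.0, 103.0, 100.0, 96.0],
})
print(summarise_cohorts(results))
```

If the means and standard deviations for matching papers are close across the two academic years, that supports treating the cohorts as being of similar ability.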
Have you seen any difference in attainment between boys and girls?
This was the focus of our 2018 white paper and we have chosen not to analyse results by gender this time. Read more about our previous research.
How will we know if the difference in teaching and learning support that pupils received whilst schools were closed is affecting the results?
It is widely reported that levels of teaching and learning support varied during school closures, but by using a large, nationally representative sample, we believe it is reasonable to assume that any variations in teaching and learning support over that period are balanced out and that the sample as a whole accurately represents the national picture.
Were my school’s PiRA, PUMA or GAPS test results used for this study?
If your school undertook Summer Paper tests in 2019 and/or baseline tested with Summer Papers in September/October 2020 and entered your results into MARK, your results will have been anonymised and aggregated to contribute to this study. The total number of test results aggregated was 460,000 in 2019 and 250,000 in 2020. Results were aggregated from 1,700 schools in England across 2019 and 2020.
How have you identified Pupil Premium children?
Schools that use MARK are able to indicate whether or not a child is eligible for the Pupil Premium. This allows schools to conduct their own analyses of the deprivation attainment gap. The Pupil Premium analysis presented here uses a subset of the data, since we have filtered out pupils for whom this information was not entered, as well as schools for which the Pupil Premium data provided was significantly out of alignment with their publicly reported Pupil Premium statistics.
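As an illustration of how such filtering could work in practice, the sketch below applies the two exclusion steps described above. The column names, data layout and divergence tolerance are assumptions made for the example and are not details taken from the study.

```python
# Hypothetical sketch of the two Pupil Premium filtering steps described above.
import pandas as pd

def filter_pupil_premium_data(
    pupils: pd.DataFrame,         # hypothetical columns: 'school_id', 'pupil_premium' (True/False/NaN)
    published: pd.DataFrame,      # hypothetical columns: 'school_id', 'published_pp_rate' (0-1)
    max_divergence: float = 0.2,  # assumed tolerance; the paper only says "significantly out of alignment"
) -> pd.DataFrame:
    # Step 1: drop pupils whose Pupil Premium eligibility was not entered in MARK.
    known = pupils.dropna(subset=["pupil_premium"]).copy()
    known["pupil_premium"] = known["pupil_premium"].astype(bool)

    # Step 2: compare each school's recorded Pupil Premium rate with its
    # publicly reported rate, and drop schools that diverge too far.
    school_rates = (
        known.groupby("school_id")["pupil_premium"].mean().rename("mark_pp_rate").reset_index()
    )
    merged = school_rates.merge(published, on="school_id")
    aligned = merged.loc[
        (merged["mark_pp_rate"] - merged["published_pp_rate"]).abs() <= max_divergence,
        "school_id",
    ]
    return known[known["school_id"].isin(aligned)]
```

The tolerance used in the real analysis is not stated, so the value here is a placeholder rather than the threshold actually applied.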
Why are standardised scores for Reception PiRA not included?
Questions on the summer Reception PiRA papers profile the three core skills that underpin early progress in reading: phonics, reading for meaning and comprehension. For children who are unable to access these questions and score below a certain threshold, no standardised score is provided, as they may need further intervention with their reading. This means we cannot reliably compare aggregated standardised scores for this paper.
What is the difference between a percentage score and a standardised score?
A percentage score indicates the raw number of marks obtained in a test as a percentage of the total number of marks available. The overall average of these percentage scores typically varies between tests because pupils find some papers to be slightly harder than others. Standardised scores control for potential variation in the relative difficulties of tests sat in different terms or by different year groups by placing results on a common scale. This does not have any particular significance for the analysis presented here, which focuses on Summer Papers and looks mainly at year-on-year changes for the same tests. Standardised scores are only calculated at subject level, whereas percentage scores are used in the analyses of topic areas within subjects.
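To make the distinction concrete, the sketch below computes both kinds of score for some made-up raw marks. The percentage calculation follows directly from the definition above; the standardisation shown, rescaling to an assumed mean of 100 and standard deviation of 15, is a common convention used purely for illustration and is not a description of how the Summer Papers are actually standardised.

```python
# Illustrative calculation of percentage scores and (assumed) standardised scores.
from statistics import mean, stdev

def percentage_score(raw_marks: int, total_marks: int) -> float:
    """Raw marks obtained as a percentage of the marks available."""
    return 100.0 * raw_marks / total_marks

def standardise(raw_scores: list[int], target_mean: float = 100.0, target_sd: float = 15.0) -> list[float]:
    """Rescale raw scores to a common scale with the given mean and standard
    deviation, so tests of differing difficulty can be compared."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

raw = [12, 18, 25, 31, 36]  # made-up raw marks out of 40
print([round(percentage_score(x, 40), 1) for x in raw])  # percentage scores
print([round(x, 1) for x in standardise(raw)])           # scores on a common scale
```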
Now you’ve published this white paper, what is next?
We will be undertaking follow-up analysis in January using Autumn Paper test results. We will seek to identify whether the same differences in attainment persist at the end of the autumn term 2020, when we will be able to compare like-for-like with autumn 2019.