A 20-year history of primary accountability
SATs are such an entrenched part of our school life that we perhaps need reminding of how they emerged and their implications for the future. Bill Boyle explores the wider context.
When a National Curriculum was first introduced to schools in England through the 1988 Education Reform Act the intention was that it ‘should be a balanced and broadly based curriculum’.
Sixteen years later, after the most intense ‘national testing offensive’ that primary schools have ever been subjected to, the Government reiterated those sentiments by stating it would ‘make sure that every subject is taught well in primary schools and that every child gets the benefit of a rich, well-designed and broad curriculum’.
Now, we are told in what has become the traditional style of government ‘leak’ to the media, that primary school children will study fewer subjects to concentrate on ‘the basics’.
There is a ring of déjà vu to this. Data, which my research centre at the University of Manchester has been collecting from a large representative national sample of primary schools since 1997, indicate that throughout that 10-year period every subject except English and mathematics, at both Key Stage 1 (ages 5-7) and Key Stage 2 (ages 7-11), has suffered reductions in its teaching time (Boyle & Bragg, 2006).
Those reductions have been made solely to create more time for the ‘teaching’ of English and mathematics, which are coincidentally the only two subjects the Government measures for its Standards agenda league tables. During those 10 years English and mathematics have been taught for over 50% of the available teaching time, reducing the other nine statutory subjects to an undignified partitioning of the remaining 50% between them. Our subject teaching time data reveal that history, geography, art and music have all but disappeared as discrete taught entities, so it will be easy to ‘roll them into one’ (as the ‘leaked’ report so graphically describes what used to be called cross-curricular teaching).
Another interesting side-effect of this ‘two tier curriculum’ (Ofsted, 2007) has been the regularly lamented ‘shortage of science graduates’ from the same government department which has witnessed the reduction of primary science teaching time by more than any of the other foundation subjects (Boyle & Bragg, 2005). What was the old adage about ‘starting them young’? Sometimes you wonder if anyone in the Government’s education department is capable of joining up dots any more.
Testing when ready
The same ‘leak’ referred to above now promises ‘testing when ready’ as the latest panacea, but the same error of conceptualisation applies. This ‘radical’ new world is still to operate within a national ‘Standards agenda’ in which the only standards measured are children’s test performances on testable sub-domains of two subjects: in other words, those parts of the English and mathematics curriculum on which children can be tested with paper and pencil tests. (I won’t debase the word assessment by using it in this context. Assessment is a support to a child’s learning, providing specific learning information to ‘move the child on through the next learning steps’, not a binary judgement of simplistic failure or success.)
So, for example, how can an assessment of a child’s English progress be attempted without assessing his or her speaking and listening skills? Well, in a government Standards agenda it can: by ignoring those ‘hard to test’ aspects of a child’s development. If education ‘standards’ (and the term has become so debased as to be meaningless) are to be measured and publicly reported (and the Government self-acclaimed for the quality of its interventions, of which there have been many!), shouldn’t they be ‘measured’ across a broader front than two subjects?
Our monitoring data show that teachers spend large amounts of the ‘teaching time’ accrued by English and mathematics on various strategies of ‘test preparation’, as they know that in the current political climate it is in their school’s accountability interests to spend time producing ‘test-wise’ pupils who will ‘perform’ and achieve the Government’s percentage target levels. How does that relate to a school’s historical definition and its primary role in developing teaching and learning and offering each child the opportunity to experience a ‘broad and balanced’ curriculum?
To further promote the feeling of déjà vu, the notion of ‘testing when ready’ is not a new one in England. In fact it is almost precisely 20 years since it was first attempted. In the autumn of 1988, the Government’s newly instituted organisation with responsibility for the assessment of the recently legislated National Curriculum, the Schools Examinations and Assessment Council (SEAC), the predecessor of today’s QCA, issued a specification for the development of Standard Assessment Tasks (the SATs, as they soon became known) for seven-year-olds.
My research centre, the Centre for Formative Assessment Studies (CFAS), put in a successful bid for developing the assessment materials based on the assumption that the intention of the process was ‘seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there’ (ARG, 2002). We assumed (from our experience of teaching, learning and assessment and a certain vagueness in the Government’s specification) that these evidences would be collected through themed units of teaching materials with integrated assessment which supported as well as measured children’s learning and which the teacher could administer individually or in small groups when ‘the time was judged right’.
The SEAC funded three development groups, one of which was CFAS (the other two were NFER and CATS, a group based round the University of London). The CFAS conceptualisation of an assessment task was based on the thinking that the materials should support teachers’ understandings of pupil learning and misconceptions at point of use and if teachers were going to make decisions on entry levels and administer the ‘assessment materials’ then teachers primarily should devise and write those materials with support from HE academic educators.
So five LEAs (Bradford, Kent, Lancashire and Salford from England and Clwyd from Wales) seconded teachers to provide classroom experience and ‘task’ writing expertise to materials development workshops and subsequently to mount classroom based trials of the materials; the Assessment Research Centres at the Schools of Education of both the Universities of Manchester (CFAS) and Liverpool (CRIPSAT) assisted the development and writing of the materials so that they had assessment validity, reliability and rigour. The CFAS Standard Assessment Task which was piloted across a national sample of seven-year-olds in 1990 represented a holistic approach to assessment while demanding explicit demonstration and enabling analysis of achievement.
The unit of assessment was the statement of attainment (SoA) and each SoA was embedded in its own purpose-built task. Unlike the single-level test notion of ‘test windows’, in our 1989 assessment materials each task contained a ‘confirmatory’ phase and an ‘exploratory’ phase. In the confirmatory phase each task was set within a general cross-curricular class-based activity. This set of materials was to be used to confirm teacher assessment (TA), so that pupils only attempted tasks which, according to their TA judgements, they should attain. The exploratory phase was packaged as progressively more difficult ‘task strings’, each covering all the SoA in an attainment target (AT). These were intended to be used to ‘stretch’ children or to put a lower boundary on their attainments. For ease of reference and access they were packaged by AT and subject. They allowed teachers to take individual or groups of children through a number of related tasks at an appropriate level. The decision to use the exploratory material with a child depended on whether or not that individual child had attained the ‘teacher-expected’ level in the confirmatory phase. Teachers had a task menu which was tailored to their TA and enabled them to plan their assessments and manage their time (“do I need to observe this assessment or is there a tangible product/outcome which allows me to assess later?”).
Through our research into classroom based assessment we recognised the tensions inherent in a national ‘test as assessment’ programme development – breadth of curriculum coverage versus the restrictions of test item writing – but within the constraints of the specification as issued by the SEAC on behalf of the Government, we felt that the assessments we had devised were embedded in current classroom activity and empowered both teacher and child through being matched to their current estimations of ability. In 1990 it was impossible to predict the wholesale enculturation of ‘teaching to the test’ and the resultant skewing of the primary curriculum which directly resulted from the government policies of league tables and the standards agenda.
The arrival of SATs
The Department of Education and Science’s (DES) view of national assessment swiftly emerged as rather different from the CFAS concept. After consuming vast amounts of public money (£3.8 million was Manchester’s share of the development pot gifted to the three contracted agencies) on large-scale materials production and trialling, necessitating the destruction of several forests, the NFER was awarded the contract for the ‘live’ paper and pencil tests. These soon became known as the ‘SATs’, to which every seven-year-old cohort from 1991 onwards has since been subjected in the first week of May.
As a postscript, in late 2007 I concluded a survey of a sample of 465 primary schools on the impact of national testing on the curriculum. More than four out of five schools reported the use of practice tests (Year 3 classes 86%; Year 6 82%) during Key Stage 2. Test preparation in Year 6 in the second half of the spring term accounted for three or more hours per week in two thirds of schools (66%) and two hours per week in just under a quarter of the sample (22%). Over three quarters of the schools (77%) indicated that the amount of time devoted to test preparation had increased over the past ten years. Three out of five schools (62%) reported that their teacher assessments were more accurate than their pupils’ test results, and 27% felt that TA was as accurate as testing. A huge majority (93%) reported that if national testing were to be reduced in status and largely replaced by teacher assessment, much less time would be spent by teachers on test preparation rather than teaching; 91% felt that pupil stress would be reduced and 87% felt that teacher stress would be reduced. A large majority of the sample (89%) also reported that national testing had narrowed the curriculum (40% by a lot; 49% a little).
A disappointing outcome
The National Curriculum end of key stage tests soon became ‘high stakes’ in England, in contradiction of the intention of the Task Group on Assessment and Testing (DES, 1988b), which devised the system of progressive performance levels. Aggregated results were used to set targets which schools are held accountable for meeting (the DfES, as was, established cohort ‘percentage success’ at fixed-level targets for English and mathematics at KS2) and to form the basis of ‘league table’ style performance indicators (Harlen & Deakin Crick, 2003). This device of control has never been far beneath the surface of this or subsequent National Curriculum legislation, legislation which was intended to increase England’s status in the arena of international competitiveness. In 1989 the national assessment development had seemed to promise an opportunity to develop meaningful and valid materials (in teacher, face-validity terms and in academic, construct-validity and reliability terms) which would support teaching and learning and also locate the first steps on the ladder of progression for young children.
Instead, we got the imposed sterility of ‘which bits of the (mathematics and English) curriculum can I write a paper and pencil test for?’ and the beginning of the road which has led us to primary classrooms being ‘tested to the limit’.
Bill Boyle has been the director at the Centre for Formative Assessment Studies (CFAS), School of Education since 1997. Previously he was a primary teacher and LEA adviser. Much of his work involves working with teachers in exploring ways of linking teaching, learning and assessment through involving pupils actively in their own learning and making that learning collaborative rather than competitive.
Taken from Managing Schools Today