Assessment Policy

Are Teacher-Assessed Exam Grades Accurate Enough?

Howard Sharron looks at the controversy over teacher-assessed grades and the likelihood of bias and inconsistencies.

Among the many bad decisions this government has made about Covid and education (sending all children back in a repeat big-bang return, similar to earlier failed lockdown exits; the failure to vaccinate teachers as a priority group; and the infamous algorithm that misjudged probable exam results), there is at least one lesson that has been learned.

Teachers will now be ‘trusted’ to provide grade assessments in lieu of exams, which have rightly been abandoned for this year. The ‘trust’ is qualified because it is born of force majeure, made necessary by there being no other safe political option.

The reluctance of the government to do this earlier has, to be fair, some grounding in research, which shows that teacher-assessed grades tend to be inaccurate and generally over-estimate a child’s performance in exams. But under-estimated predictive assessments also occur, and one can speculate that these correspond to the low expectations teachers have of certain groups, ethnic minorities and white working-class boys to name but two.

It’s hard enough for trained examiners to be fair and accurate in their assessments, especially in subjects like English or Politics where the examiner has to employ some level of subjectivity in the marking. So it’s not surprising that teachers find it hard. Any professional, in fact, caught up in the day-to-day detail and effort of fulfilling goals would find it hard to step back and be objective. It’s why consultants – an external pair of eyes – have such a valuable role in seeing through the thickets.

Teachers are also human, with their likes and dislikes and their unconscious biases, and these will play a role in grade assessments despite all professional attempts to extinguish them. These biases need only be present to a small degree to have a profoundly negative impact on students’ lives.

It’s interesting to look back and see how teacher grades could have affected our lives in the past. In the case of this writer, it would have been profoundly damaging. I was widely expected to fail my O-Levels and fall victim to the annual 5th Form Cull at a very unpleasant Grammar School. The Headteacher even told me he was looking forward to saying goodbye! In the event, I did very well, much to the surprise and discomfiture of many – thankfully not all – of the staff.

Mostly the bias will run the other way, because teachers generally want their children to do well. But negative biases will still be there, and profound injustices will take place. Even positive inaccuracy can have negative outcomes when it leads to the wrong choice of courses and universities.

So the issue becomes whether these inaccuracies in teacher assessment can be mitigated. One way would be to take much more seriously the internal validation of teacher assessment, whether peer to peer or by the SLT. One suggestion is to cross-check anonymized assessed grades against the pattern of marks recorded in progress-tracking software. But these systems, although they introduce some objectivity into progress tracking, are not totally free from bias or inaccuracy.
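To give a concrete sense of what such a cross-check might look like in practice (a hypothetical sketch only — the grade scale, tolerance and IDs below are illustrative assumptions, not a feature of any particular tracking product), anonymized teacher-assessed grades could be compared against the grades a tracker predicts, with large discrepancies flagged for moderation:

```python
# Hypothetical sketch: flag anonymized students whose teacher-assessed
# grade differs from the tracker's predicted grade by more than a set
# tolerance, so peer moderators or the SLT can review those cases first.

GRADE_POINTS = {"A*": 8, "A": 7, "B": 6, "C": 5, "D": 4, "E": 3, "U": 0}

def flag_discrepancies(assessed, predicted, tolerance=1):
    """Return (anonymous_id, gap) pairs where the assessed and predicted
    grades differ by more than `tolerance` grade points."""
    flagged = []
    for student_id, grade in assessed.items():
        if student_id not in predicted:
            continue  # no tracker prediction to compare against
        gap = GRADE_POINTS[grade] - GRADE_POINTS[predicted[student_id]]
        if abs(gap) > tolerance:
            flagged.append((student_id, gap))
    return flagged

# Example: ST-102's assessed grade sits two points above the prediction.
assessed = {"ST-101": "B", "ST-102": "A", "ST-103": "C"}
predicted = {"ST-101": "B", "ST-102": "C", "ST-103": "C"}
print(flag_discrepancies(assessed, predicted))  # [('ST-102', 2)]
```

The point of the anonymization is that moderators see only IDs and gaps, not names, which removes one channel for the very biases the check is meant to catch.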

There needs to be a school culture where poor assessment in either direction is looked for, and frowned upon. This is important not just for exams but for the normal assessment of work during the year. Effective feedback is an important element of the learning process, and schools need to check for assessment inflation for everybody’s sake. At the moment there is a lot of pressure on teachers – even with progress-tracking software – to always show positive trends, and if a few inaccuracies creep in, well, nobody is going to mind very much.

A second form of validation could be external, by partner schools: a sort of passing your books to the person sitting next to you, but on a school scale. It could be one of the key roles of MATs to do this, and even for schools outside MATs it can’t be beyond the level of ingenuity to find a school with some connection to moderate your assessments.

This sort of collaboration is time-heavy, but a process to systematise this type of moderation could yield very positive results. It could lead to teaching and learning exchanges where schools see differing levels of performance emerging, and this visibility at a teacher level – not a statistical one – could be a powerful driver for change and school improvement.

At university or college level, course admissions officers could adopt a more relaxed view of assessed grades and accept that there will be margins of error and that the rigours of the course itself will have a sorting effect. Or there could be more use of course interviews and/or entrance tests, which could be much more indicative of a student’s likelihood of succeeding at a particular subject than exams. Indeed, one of the findings in the US is that high school graduation tests in no way prepared students for university studies in their chosen courses.

This has led to the major US reform effort around the Common Core Curriculum, which is trying to find new assessments for problem-solving, knowledge-management, creativity, collaboration and inquiry skills.

All of which raises the question of whether we are all getting a little obsessed with predicting accurate grades for exams which won’t necessarily predict success at university or college, or indeed in life. This pandemic should trigger a search for new assessments and new forms of study that will consign our current ‘gold standards’ of A-Levels and GCSEs to history. We got very near to this under the last Labour government, when a module- and points-based cumulative system was very much in favour, only to fall victim at the last minute to a Blair veto. Maybe it’s now time to dust some of these proposals off?

Howard Sharron is the publisher of TeachingTimes.
