Professional Development

How To Do Effective Peer Review – Some Key Factors

David Godfrey draws on current research evidence to give guidance on how groups of schools can use peer review as part of their improvement strategy

This article considers the practical problems of setting up, conducting and sustaining a successful peer review programme, cluster, network or cycle. To initiate such a programme, you will need to lead, or be involved in the leadership of, a school or an organisation that includes or oversees at least a few schools.

The principles and practice of peer review can provide a structure and purpose for collaboration that either builds on a previous collaborative relationship or gives rise to a new one. Given the variety of arrangements in which schools in different phases work, there are a number of issues to consider. The research evidence on the use of peer review is burgeoning, and further studies will certainly add to the field; however, a few working principles can be extrapolated from what is already known. This article takes account of the weight of evidence available and then sets out a series of questions to consider rather than a prescription for a single recipe to suit all tastes. It begins with an examination of the strengths and weaknesses of the evidence base before moving on to a systematic framework for readers to work through various aspects of the peer review process. To support this, a number of professional learning reflection activities are included to help readers consolidate their understanding of the article and apply it to their own context and potential implementation of peer review.

The evidence base

The difficulty of measuring impact

As with research in other areas, such as external evaluation (EE) and school self-evaluation (SSE),[i] it is very difficult to establish a direct link between the activities of peer review and gains in student outcomes. There are a variety of reasons for this.

First, schools engage in many activities at whole-school level that could impact on student achievement, making it hard to isolate which of these has a causal effect. Even if attempts are made in the research to compare schools that are using peer review with a control group, the gold-standard randomised controlled trial (RCT) is unsuitable for methodological reasons. This is because random allocation to groups (treatment or control) does not work for peer review partnerships, which generally allow schools to choose who to work with: partners they trust and wish to collaborate with. It is also difficult to know which outcome to measure; schools may start with an aim to improve numeracy at Key Stage 2 but later change to look at literacy or science achievement, or at one of many non-academic outcomes. This is not to mention the impossibility of continuing with indicator measures that have been abandoned due to a global pandemic![ii]

So, rigorous impact evidence on the effects of peer review is always going to be hard to come by. It is also the case that peer reviews are conducted in a number of different ways, and these also interact with the external accountability framework. Therefore, ‘isolating’ the effect of peer review can overlook the alignment of evaluation and accountability in the system as a whole, with all its inherent intended and unintended consequences.[iii]
