Step away from the data

I am a governor of a secondary school which is in the process of becoming “good”. To be clear, I mean here the “good” that is measured and measurable by Ofsted. I actually think the school is already good: the staff are hard-working and committed, the pupils are motivated and well-behaved, and everyone who is part of the school community appears to buy into its aims and ethos. While I can see it’s not (yet) outstanding, what else could you want from a school to make it good?

In this school’s push to become “good” (as measured by Ofsted), it has taken up an approach to data collection and analysis which is utterly meticulous. It is this approach to data which I want to explore here, and I am using it as a case study for all schools in England that are working within the current system of accountability and self-improvement. Each child in this school, from year 7 onwards, will be tested three or four times per year in each subject, and each test will be analysed on a question-by-question, pupil-by-pupil basis. Teachers will see, for every child in every class, which topics they have struggled with in that test, and then they will put in place interventions to improve their attainment in that topic.

At first glance this seems like a strategy which is perfectly designed to lead to improvements in attainment, but, as I think about it more, it raises questions. Firstly, what is the quality of the data being gathered?  Can we draw any meaningful conclusions from one group of children, sitting one particular test, being unable to express their understanding of, for example, the concept of a pure substance?  If we then put in place a further lesson on pure substances, and the children improve their responses to the test question, are we improving their understanding, or are we simply ensuring that they are better trained in answering the question correctly?

Secondly, what does it say to our pupils if we break down each subject, and each topic within each subject, into tiny crumbs like this? We often complain that students struggle to see the big picture, and to apply their knowledge to new contexts, and it seems to me that, with this type of testing and intervention regime, we are in danger of exacerbating this problem.

Finally, what are we saying to our teachers when we ask them to micro-analyse their outcomes in this way? One inference is that we don’t trust them to make their own assessments of the quality of their teaching, instead asking them to rely on data from written tests.  I accept that this is not the only way that teachers will learn about what is going on in their classrooms, but it is the one which is likely to take up the most time, and I think it is probable that it limits the possibility of creative, innovative, interesting classroom practice.

Of course, it is infuriatingly difficult to argue with the aim of all this analysis, which is to generate improved outcomes for children; a rationale with a moral authority which silences all debate. And while clearly I don’t want to suggest that pupil outcomes are unimportant, I think that we need to remind ourselves that behind each snippet of data lies an actual child who probably sometimes just gets the answer wrong on the test, even though they understand the concept perfectly well and could explain it in conversation or on another day.  And we need to remember that behind each test lies a teacher who is hoping to change their students’ lives by making their learning enjoyable, challenging and worthwhile, and that the way they want to achieve this is unlikely to be through written testing.

This type of data-driven approach is happening in many schools, pushed, perhaps, by data-obsession at a national level: for a quick snapshot of the kind of data, look at the Ofsted data dashboard or the Department for Education’s performance tables. It is ironic that this is happening at a time when the government is looking to reduce teacher workload. It is also a significant contributor to pupil stress levels around examination performance.

Instead of generating all this numerical data, we could be investing our time and effort (and money) in professional development. This is currently high on the political agenda, so we need to seize the opportunity to create a system which values in teachers the same qualities we work hard to develop in our pupils: independent thinking, self-reflection, criticality and innovation. We need to allow teachers room to trial new classroom practices and then trust them to make judgements about their quality. After all, one slightly unfocused lesson, in which a teacher trials a potentially exciting new approach to a topic, will not ultimately affect a student’s examination performance. In this way, supported by headteachers and governing bodies, we will have a workforce of teachers who are able to cultivate, understand and identify pupil learning, without resorting to crunching numbers from a test, and a school system driven by innovative improvement in the classroom.

We should be supporting our teachers to feel empowered in their teaching and happy in their classrooms, and to do this we need to allow them to step away from the data.

Emily Perry leads the Centre for Science Education, part of the Sheffield Institute of Education

2 responses to “Step away from the data”

  1. John Wardle

    Perhaps it is the easy option to judge the outcomes of a school system on quantifiable and measurable items? We now have classrooms where the overarching language is that of achieving in the test. The real challenge is to design a high-performing system which educates rather than trains. Engagement, excitement and inspiration are much more challenging to develop, assess and analyse than discrete improvements in arbitrary test questions. Professional development, as Emily indicates, has a role to play, but it too must challenge teachers to reflect on and develop their understanding of the learning affected by their teaching: CPD which allows teachers to challenge the rhetoric, examine the evidence and be empowered to be creative in their approaches. Over-reliance on measurable ‘data’ can create a smokescreen which confuses attainment in the defined measures with an educated, intrinsically motivated and inspired learner (or teacher!).

  2. Dave Jones

    The assumption here is that better exam results equate to better outcomes for children. This is so prevalent across the whole of the school system that it now, literally, “goes without saying.” Exam results are only a proxy for something much more complex: they reflect effort, application, motivation, organisation and memory as well as understanding. They are also quite a poor proxy in many ways, and when exam performance is maximised many other long-term, important skills are degraded, including, often, the cognitive development of the child. Emily is right to question the use of data in this way and the opportunity cost that comes with it, both for teachers and students.
