Replacing National Curriculum levels

The original system of National Curriculum levels was based upon children’s cognitive development as they grow older. The levels began with descriptors covering low-level demand and these developed broadly in line with Bloom’s taxonomy up through to levels 7 and 8.  There was a general understanding within education of where on the ladder children were most likely to be at different ages.

In principle, the approach is educationally sound as a formative tool, but in practice it was also used as an accountability measure, and this caused it to develop two insurmountable operational flaws — flaws that have, 20 or more years after the system’s inception, finally killed it off.

The first operational flaw concerns the application of the descriptors.  Their appropriate use is to assess the performance of a pupil on a particular task.  But given that every task has different demands (in myriad ways), it would be wrong to assume that assessing the same child on the same day but on a different task would result in the same level.  So a teacher’s data on a child will span a range of levels, and yet we insist that they report only a single level when held to account (to parents, to the school, even to the child).  Levels are not percentages in tests; they cannot be averaged, and when we do collapse the range to a single “representative” value we give a poor indication of progress: we ignore the wonderful pieces of work, cancelled out as they are by the less impressive (but unrelated) pieces of work from a different day.

Which leads us to the second operational flaw: the need to demonstrate progress.  It is probably broadly true to say that children, on average, make two levels of progress over a key stage.  But that’s it.  Beyond that, we must remember that some will progress more and some less (and often this goes with ability) and that the rate of progress is very unlikely to be linear.  The need to record definite progress at closely spaced intervals (a half term, a term, even an entire year) has led to the confusion of “sublevels” and the nonsense of “two sublevels of progress per year”.  This grew (reasonably?) from teachers’ desire to be able to say things like “well they occasionally did some level 5 work alongside lots of level 4 work at the start of the year, but now they consistently produce level 5 work with the occasional piece of level 6 work”, but when reduced to “they’ve moved from a 5c to a 5a”, the educational meaning is not only lost, but perverted.

I will now argue for an alternative that attempts to retain the original focus on cognitive demand (which I think is correct), but is free of the complicated smoke screen of sublevels.  It is not profound, just a change of emphasis.

I would argue that we should report a child’s ability in a subject relative to age-specific criteria (rather than all-encompassing ones that the child progresses through as they grow older), and that this be done reasonably bluntly, so as not to give anyone (parents, government, even the teachers themselves) the impression that a finer, yet still simplistic, representation is even possible.  This could work by reporting each child as either “foundational”, “secure”, “established” or “exceptional” in each subject at the point in time that the judgement is being made.  This judgement would be made as described in the next paragraph, but, crucially, would not be a progress measure in the way levels were: there would be no expectation for a child to progress from “foundational” to “secure” by the next reporting window, because the next reporting window would have different criteria for each category.  The measure would be a snapshot of current performance at each point of reporting, more akin to GCSE grades than National Curriculum levels in that regard, but based upon the cognitive demand that levels were originally intended to capture, rather than on a percentage mark in a test.

So, how would a teacher “categorise” pupils into one of the four divisions?  The driver here would be what is useful.  One important use would be to answer a parent’s most basic question: “how is my child doing in your subject?”.  I would argue that, allowing for the necessary tolerance of not knowing what the future holds for individuals, “foundational” pupils at KS3 should most likely secure grades 1, 2 or 3 (ie D to G) in their GCSEs at the end of Year 11.  “Secure” pupils should most likely go on to get grades 4 or 5 (C to lower B); “established”, grades 6 or 7 (higher B to lower A); and “exceptional”, grades 8 or 9 (higher A to A*).  A school could use this guide to generate approximate percentages to loosely check teachers’ interpretations: for instance, perhaps 10% exceptional, 25% established, 50% secure and 15% foundational.  In this way, a child and their parents build up, over time, an understanding of the child’s strengths and weaknesses in a very transparent manner — certainly more transparently than levels allowed.
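To make that loose check concrete, here is a minimal sketch (in Python, purely illustrative and not part of the proposal itself) of how a department might compare its spread of judgements with the indicative percentages above; the grade mapping, the 10/25/50/15 split and the tolerance are assumptions for the sake of the example, not prescriptions.

```python
# A minimal, illustrative sanity check of teacher judgements against indicative proportions.
# The mapping, percentages and tolerance below are assumptions, not fixed rules.

from collections import Counter

# Indicative mapping from each category to likely eventual GCSE grades (assumed, for illustration)
LIKELY_GCSE_GRADES = {
    "foundational": (1, 2, 3),   # roughly old D to G
    "secure":       (4, 5),      # roughly C to lower B
    "established":  (6, 7),      # roughly higher B to lower A
    "exceptional":  (8, 9),      # roughly higher A to A*
}

# Loose whole-school expectation of how judgements might be spread (illustrative percentages)
EXPECTED_SHARE = {"foundational": 15, "secure": 50, "established": 25, "exceptional": 10}

def check_distribution(judgements, tolerance=10):
    """Compare a cohort's judgements with the expected shares, flagging large deviations."""
    counts = Counter(judgements)
    total = len(judgements)
    for category, expected_pct in EXPECTED_SHARE.items():
        actual_pct = 100 * counts.get(category, 0) / total
        flag = "CHECK" if abs(actual_pct - expected_pct) > tolerance else "ok"
        print(f"{category:13s} expected ~{expected_pct:2d}%, actual {actual_pct:5.1f}%  [{flag}]")

# Example: a hypothetical cohort of 120 judgements
cohort = ["secure"] * 60 + ["established"] * 30 + ["foundational"] * 20 + ["exceptional"] * 10
check_distribution(cohort)
```

The point is not the arithmetic but the retrospective monitoring: a department whose spread drifts well away from the agreed guide would revisit its interpretation of the categories, not adjust individual judgements to fit.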

That is a retrospective viewpoint, of course, and could only ever be a loose guide anyway.  In reality, that guide would need to inform a school’s (and each department’s) approach to differentiating schemes of learning and individual lesson objectives and tasks so as to create appropriate challenge.  For instance, in an All-Most-Some model, the lowest-demand objectives would be aimed towards supporting the “foundational” pupils, whereas the slightly more demanding objectives would support the “secure” pupils.  If the objectives are written correctly, a pupil’s ability to access them would reveal which “category” they are mostly operating within.  Schemes of work would thus be written to match a level of cognitive demand whose differentiation is decided in advance (Bloom, SOLO, etc), perhaps hanging together on the key assessment opportunities that will allow the teacher to make a judgement (as York Science promotes).  Those judgements would, formatively, show the pupil how to progress, and would also give the teacher something concrete in which to anchor their judgement.

This is not National Curriculum levels using different words.  It takes the best of the levels (criteria based upon cognitive demand), but dispenses with “expected progress” and, with it, any “fine” representation of a pupil’s ability.  The fine detail will still be there — but in its entirety in the teacher’s markbook, where it can be put to genuinely formative use.  And pupils will still progress — but against criteria laid out topic-by-topic in carefully crafted schemes of learning and lesson activities, rather than against a generic set of descriptors designed to span more than a decade of learning across a dozen disparate subjects.

Teachers know how to assess and they know how to move individuals forward — and they know that it is all about details.  Learning (and its assessment) needs to be matched to each individual objective, and “overall” progress is a hazy notion that cannot be captured accurately or usefully in a snapshot, let alone in a single digit.

Teachers should be encouraged (and allowed) to commit to crafting superb activities (and assessment criteria) to move pupils through the cognitive demands that are inherent to the concepts in front of them at each moment in time.  And when they are then asked to report a child’s achievements in a subject, they should be allowed to give “woolly” snapshots (more an indication of future GCSE performance than anything else, so that pupils and parents can tell strengths from weaknesses), with the detail being conveyed in the feedback in an exercise book, the conversations of a parents’ evening or the written targets of the annual report.  How a subject department turns their internal data into a whole-school categorisation would be up to them, monitored and tweaked retrospectively by how accurate an indicator it turns out to be.  But it would also be the key driver for ensuring learning objectives are pitched at the correct level of demand in every lesson, for every child, which is, I think, true to the spirit of the original National Curriculum levels.
