Add your timetable to Google Calendar

A couple of years ago, I switched from a paper diary to Google Calendar and have never looked back. This year, I went one step further by adding my timetable to my calendar as well. It would take a while to add over 1000 events to the calendar one at a time, though, so I wrote a quick Google Sheet/Script to automate it:

https://drive.google.com/open?id=1I9In6vG2B0C-jVHePxzbko8hKFujh60HsIEgqVLZMKA

Just edit the sheets to match your own timetable and school day timings, then hit the “add my timetable” button. It takes no more than 10 minutes. Enjoy!
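If you’re curious how the automation works before opening the sheet, the heart of such a script is only a few lines of Google Apps Script. Here’s a minimal sketch, not the actual code in the linked sheet; the sheet name, column layout and function name are all my own assumptions:

// Minimal sketch: read one event per row from a sheet and add it to the
// default Google Calendar. The 'Timetable' sheet name and the
// (title, start, end) column layout are assumed for illustration.
function addMyTimetable() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Timetable');
  var rows = sheet.getDataRange().getValues();
  var calendar = CalendarApp.getDefaultCalendar();
  for (var i = 1; i < rows.length; i++) {  // skip the header row
    var title = rows[i][0];
    var start = new Date(rows[i][1]);
    var end = new Date(rows[i][2]);
    calendar.createEvent(title, start, end);
  }
}

The real sheet has to do rather more than this to build 1000+ events from a timetable grid and the day’s timings, but CalendarApp is the service that does the heavy lifting.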


Kinematics resources

Here is a collection of my favourite resources when teaching kinematics.  If you use different ones, please add them in the Comments at the bottom.

Distance-time and velocity-time graphs:

The Universe and More Graph Game

and

PhET simulation (The Moving Man)

(PhET has dozens of high quality applets, if you’ve not seen them before.)

Vector addition of velocities:

Mythbusters clip (via YouTube)

Deriving the Equations of Motion for Uniform Acceleration:

My own worked derivation (via YouTube)

Introducing projectiles:

The Mega Whoosh (beware!) (via YouTube)

Simple practice with projectiles:

PhET simulation (Projectile Motion)

Independence of vertical and horizontal motion:

Mythbusters clip (via YouTube)

The Monkey and the Hunter:

Alom Shaha’s Monkey and Hunter videos (National STEM Centre video)
(Teacher instructions version and student version)

and

Stage show clip from University of Minnesota
Also available on YouTube here.

(Both have parent pages, full of other videos, incidentally:
National STEM Centre Physics Demonstration Films and
University of Minnesota Physics Outreach Programme)

Worked examination questions:

DrPhysicsA Worked example questions (via YouTube)

Eclipse resources

Here are two PowerPoint presentations I’ve put together for teachers at my school to use with their classes prior to Friday’s partial eclipse (20th March).  Feel free to use or modify them if they’re any use to your own classes.

This one is basic, aimed at tutors getting their tutor groups in the mood:

Eclipse for tutors

This one is aimed more at science teachers able to talk their classes through the slides (see slide notes).  The underlying original images are taken from Google searches or this PPT that John Hudson has put on TES Resources.  Sorry, that’s the best I can do by way of acknowledgement!

Eclipse for science teachers

Fingers crossed for clear skies!

Why it is right to remove assessed practical work from GCSE sciences

Why I applaud Ofqual for standing their ground over internal assessment of practical skills in GCSE and A-level Sciences

When Ofqual suggested in December that the new GCSE science qualifications should not contain any assessed practical work, I couldn’t believe it.  As a science teacher, I found this music to my ears; it seemed too good to be true.

This rare flush of optimism over a curriculum edict soon gave way to anger and confusion when both the Education Secretary and national science organisations (reported here) objected to the decision.  How could they disagree?

Having reflected, I can now see that we are coming at the decision from opposite directions.  As a teacher, “no assessed practical work” translates, for me, to “no internal assessment”, whereas to other stakeholders it perhaps sounds a lot like “no longer any need for any practical work in science education”.  The key difference that needs to be appreciated, however, is that internal assessment never has worked and never can work, and it is vital that it is removed completely, once and for all…but this does not necessarily lead to practical-free science lessons.

In my experience, it is very rare for an officially-produced document to contain so much good sense, but when I read Ofqual’s consultation I agreed with nearly every aspect of it.  The bottom line, for me, is that internal assessment is fundamentally incompatible with an accountability framework.  You can’t ask teachers to conduct assessments impartially with their students when the outcomes affect both their own pay progression and the whole school’s standing in national league tables.  This is not to suggest that the majority of teachers break any rules, just that they are incentivised to devote an unhealthy amount of time to playing the system to give their students the very best chance.  This isn’t practical work; it is hoop-jumping.  It is further compounded by incredible systemic flaws: assuming equivalence between different controlled assessments; 50 raw marks being equivalent to 100 UMS (i.e. just 10 raw marks, internally assessed, are worth half an entire GCSE grade overall); poor marking guidance allowing an inordinate amount of subjectivity; controlled assessments being common across different GCSEs (e.g. Physics and Additional Science), with the grade boundaries a compromise for both.  (All of that is based on AQA’s GCSE controlled assessments, which are what I teach.)

Even pre-2006, when we assessed practical work via essays using “POAE”, it was still hoop-jumping: you gave students tick-lists and made them keep rewriting their work until they met the criteria.  Now we run intensive off-timetable days in which we knock out the assessed components, interspersed with intensive prep lessons moments before each one.  It is nonsense how the assessment has been allowed to drive the teaching.  “Teaching”, not “Teaching and Learning”, incidentally: there is no learning associated with controlled assessments.

To non-teachers, I would emphasise one more important thing: it is only through delivering these internal assessments, day in day out, class after class, year after year, that you can truly understand the inappropriateness of any hypothetical system of internal assessment.  It is fundamentally opposed both to the prioritising of learning and to the use of the outcomes for accountability.  The situation has always been better at A-level, because cohorts are smaller and the ability range is narrower and higher, but at GCSE it is unsalvageable.

So, I am very happy indeed to lose internal assessment.  The danger, of course, is that the systemic incentive to devote curriculum time only to that which improves grades remains, and without practical work contributing to grades there is a very real possibility of schools abandoning it.  This is especially true alongside shrinking budgets and the expense of laboratories, equipment, consumables and technicians.  But this isn’t what Ofqual has suggested either: there is an explicit statement that conducting practical work will give students better access to 15% of the marks on the written examinations.

This is a compromise, of course, and we have yet to see either the details (which practicals will be chosen?) or the application of the Law of Unintended Consequences.  Perhaps the practicals will not be ideal?  Perhaps schools will simply demonstrate instead of allowing full-class practicals?  Perhaps exam fees will go up, squeezing budgets further?  Perhaps the exam questions will be shockingly poor (we’ve certainly seen enough of that over the years)?  Time will tell, but I cannot see how any future situation could possibly be as flawed as the current one (famous last words!) and so I am very happy to try a different approach.

So I am delighted to see Ofqual standing their ground, despite so much criticism from such established bodies.  As a practising teacher, I will ensure that practical work remains at the heart of what I do, but my students and I will no longer be shackled by the flawed assessment principles that have so negatively distorted the curriculum and the rigour of its assessment for the last 20 years.

Excel UMS function

Over the past decade, my departmental recording spreadsheets that track student data have evolved into pretty sophisticated affairs, making full use of Excel’s functionality to make my life easier.

As a teacher, though, Excel does lack a function that I use all the time — converting a raw score into a standardised score. In the most common circumstance, this means interpolating between grade boundaries to produce either a “UMS” score or an FFT-style score.

For instance, a student scores 34/90 on a paper with grade boundaries of 31 (grade C, 96 UMS) and 43 (grade B, 112 UMS). A score of 34 is therefore a B, but how many UMS? More than 96, but less than 112. The exact figure requires “linearly interpolating” between the 96 and the 112:

Student’s unified score = 96 + [ ((34 – 31) / (43 – 31)) x (112 – 96) ] = 96 + [ 3/12 x 16 ] = 100

Excel can do this, but it is a little cumbersome to type in every time, especially when all the numbers need collecting from different places first.

Instead, I wrote a custom function to do it for me:

Student’s unified score = UMS(student’s score, grade boundaries, UMS values, decimal places)

All it does is the calculation above, but without the headache of having to code it every time. The only downside is that you have to save your Excel spreadsheet as a “macro-enabled” spreadsheet, with the extension .xlsm, but that’s a small thing.
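For anyone who’d like to see the idea before downloading, here is a minimal sketch of what such a function can look like in VBA. It illustrates the interpolation logic rather than reproducing the exact code in the file below; the parameter names, and the assumption that the boundaries are supplied in ascending order, are mine:

' Sketch of a custom UMS function. Assumes 'boundaries' and 'umsValues'
' are equal-length ranges, sorted in ascending order.
Public Function UMS(rawScore As Double, boundaries As Range, _
                    umsValues As Range, decimalPlaces As Integer) As Variant
    Dim i As Integer
    Dim b1 As Double, b2 As Double   ' bracketing grade boundaries
    Dim u1 As Double, u2 As Double   ' corresponding UMS values

    ' Find the pair of boundaries that brackets the raw score
    For i = 1 To boundaries.Count - 1
        b1 = boundaries.Cells(i).Value
        b2 = boundaries.Cells(i + 1).Value
        If rawScore >= b1 And rawScore <= b2 Then
            u1 = umsValues.Cells(i).Value
            u2 = umsValues.Cells(i + 1).Value
            ' Linearly interpolate between the two UMS values
            UMS = Round(u1 + (rawScore - b1) / (b2 - b1) * (u2 - u1), decimalPlaces)
            Exit Function
        End If
    Next i

    UMS = CVErr(xlErrNA)   ' raw score lies outside the supplied boundaries
End Function

With the worked example above, a call of =UMS(34, A1:A2, B1:B2, 0), where A1:A2 holds the boundaries 31 and 43 and B1:B2 holds the UMS values 96 and 112, returns 100.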

Here’s an example of the function in use: UMS Function.xlsm

Feel free to copy the code into your own spreadsheets if you can find a use for it. To see the code, click “Developer” on the ribbon, then “Visual Basic”. (If you can’t see “Developer”, add it by clicking “File” -> “Options” -> “Customize Ribbon” and selecting the “Developer” check box.)

Note: when opening macro-enabled spreadsheets, it is a feature of Excel to disable macros and require users to choose to re-enable them each time. This is an annoying but prudent security feature, as it is possible to write macros that harm your computer. A sensible response is to leave the macros disabled on first opening a new file and to look at the macro’s code (as above) to check there’s nothing malicious going on. You can then reopen the file and choose to re-enable the macros knowing that it is safe to do so.

Note: the grade boundaries for the above are taken from AQA’s Uniform Mark Scale Converter Tool. Don’t get me started on 34/90 qualifying students for a B grade…

Inspirational posters for the classroom wall

I made these A3 posters to get students thinking about the bigger picture. I tweeted some (low res) pictures a while ago, but have recently been asked for the original templates, which are print quality. So, here they are:

A3 portrait posters (.pptx file)

A3 landscape posters (.pptx file)


Replacing National Curriculum levels

The original system of National Curriculum levels was based upon children’s cognitive development as they grow older. The levels began with descriptors covering low-level demand and these developed broadly in line with Bloom’s taxonomy up through to levels 7 and 8.  There was a general understanding within education of where on the ladder children were most likely to be at different ages.

In principle, the approach is educationally sound as a formative tool, but in practice it was also used as an accountability measure, and this caused it to develop two insurmountable operational flaws: flaws that have, 20 or more years after the system’s inception, finally killed it off.

The first operational flaw surrounds the application of the descriptors.  Their appropriate use is to assess the performance of a pupil on a particular task.  But given that every task has different demands (in a myriad different ways), it would be wrong to assume that assessing the same child on the same day but on a different task would result in the same level.  So a teacher’s data on a child will span a range of levels, and yet we insist that they report only a single level when held to account (to parents, to the school, even to the child).  Levels are not test percentages; they cannot be averaged, and when we do collapse the range to a single “representative” value we give a poor indication of progress: we ignore the wonderful pieces of work, cancelled out as they are by less impressive (but unrelated) pieces of work from a different day.

Which leads us to the second operational flaw: the need to demonstrate progress.  It is probably broadly true to say that children, on average, make two levels of progress over a key stage.  But that’s it.  Beyond that, we must remember that some will progress more and some less (and often this goes with ability) and that the rate of progress is very unlikely to be linear.  The need to record definite progress at closely spaced intervals (a half term, a term, even an entire year) has led to the confusion of “sublevels” and the nonsense of “two sublevels of progress per year”.  This grew (reasonably?) from teachers’ desire to be able to say things like “well they occasionally did some level 5 work alongside lots of level 4 work at the start of the year, but now they consistently produce level 5 work with the occasional piece of level 6 work”, but when reduced to “they’ve moved from a 5c to a 5a”, the educational meaning is not only lost, but perverted.

I will now argue for an alternative that attempts to retain the original focus on cognitive demand (which I think is correct), but is free of the complicated smoke screen of sublevels.  It is not profound, just a change of emphasis.

I would argue that we should report a child’s ability in a subject relative to age-specific criteria (rather than all-encompassing ones that the child progresses through as they grow older), and that this be done reasonably bluntly so as not to give anyone (parents, government, even the teachers themselves) the impression that a finer-grained yet still simple representation is even possible.  This could work by reporting each child as either “foundational”, “secure”, “established” or “exceptional” in each subject at the point in time that the judgement is being made.  This judgement would be made as described in the next paragraph but, crucially, would not be a progress measure in the way levels were: there would be no expectation for a child to progress from “foundational” to “secure” by the next reporting window, because the next reporting window would have different criteria for each category.  The measure would always be a snapshot of current performance, more akin to GCSE grades than National Curriculum levels in that regard, but rooted in the same underlying cognitive framework that levels were originally intended to reflect, rather than in a percentage mark in a test.

So, how would a teacher “categorise” pupils into one of the four divisions?  The drivers here would be what is useful.  One important use would be to answer a parent’s most basic question: “how is my child doing in your subject?”.  I would argue that, allowing for the necessary tolerance of not knowing what the future holds for individuals, “foundational” pupils at KS3 should most likely secure grades 1, 2 or 3 (i.e. D to G) in their GCSEs at the end of Year 11.  “Secure” pupils should most likely go on to get grades 4 or 5 (C to lower B); “established”, grades 6 or 7 (higher B to lower A); and “exceptional”, 8 or 9 (higher A to A*).  A school could use this guide to generate approximate percentages to loosely check teachers’ interpretations: for instance, perhaps 10% exceptional, 25% established, 50% secure and 15% foundational.  In this way, a child and their parents build up, over time, an understanding of the child’s strengths and weaknesses in a very transparent manner, and certainly more transparently than levels allowed.

That is a retrospective viewpoint, of course, and could only ever be a loose guide anyway.  In reality, that guide would need to inform a school’s (and each department’s) approach to differentiating schemes of learning and individual lesson objectives and tasks so as to create appropriate challenge.  For instance, in an All-Most-Some model, the lowest-demand objectives would be aimed at supporting the “foundational” pupils, whereas the slightly more demanding objectives would support the “secure” pupils.  If the objectives are written correctly, a pupil’s ability to access them would reveal which “category” they are mostly operating within.  Schemes of work would thus be written to match a level of cognitive demand whose differentiation is decided in advance (Bloom, SOLO, etc.), perhaps hanging together on the key assessment opportunities that allow the teacher to make a judgement (as York Science promotes).  Those judgements would formatively allow the pupil to know how to progress and also allow the teacher to anchor their judgement in something concrete.

This is not National Curriculum levels using different words.  It takes the best of the levels (criteria based upon cognitive demand) but dispenses with “expected progress” and, with it, any “fine” representation of a pupil’s ability.  The fine detail will still be there, but in its entirety in the teacher’s markbook, where it can be put to good formative use.  And pupils will still progress, but against criteria laid out topic-by-topic in carefully crafted schemes of learning and lesson activities, rather than against a generic set of descriptors designed to span more than a decade of learning across a dozen disparate subjects.

Teachers know how to assess and they know how to move individuals forward — and they know that it is all about details.  Learning (and its assessment) needs to be matched to each individual objective and “overall” progress is a hazy notion that cannot be captured accurately or usefully in a snapshot, let alone in a single digit.

Teachers should be encouraged (and allowed) to commit to crafting superb activities (and assessment criteria) to move pupils through the cognitive demands that are inherent to the concepts in front of them at each moment in time.  And when they are then asked to report a child’s achievements in a subject, they should be allowed to give “woolly” snapshots (more an indication of future GCSE performance than anything else, so that pupils and parents can tell strengths from weaknesses), with the detail being conveyed in the feedback in an exercise book, the conversations of a parents’ evening or the written targets of the annual report.  How a subject department turns their internal data into a whole-school categorisation would be up to them, monitored and tweaked retrospectively by how accurate an indicator it turns out to be.  But it would also be the key driver for ensuring learning objectives are pitched at the correct level of demand in every lesson, for every child, which is, I think, true to the spirit of the original National Curriculum levels.