If you are a teacher and you use Google Calendar, you might like to add your timetable to it. It would take a while to add over 1000 events to the calendar one at a time, though, so I have written a quick Google Sheet/Script to automate it: https://drive.google.com/open?id=1U6dXaqh0V8a0vb-pJbh8aNuCac-olej2Yh-y7mVbL8E Just edit the sheets to match your own timetable and school day timings, then hit the “add my timetable” button. It takes no more than 15 minutes. Enjoy!
Author: S Billington
Kinematics resources
Here is a collection of my favourite resources when teaching kinematics. If you use different ones, please add them in the Comments at the bottom.
Distance- and velocity-time graphs:
The Universe and More Graph Game
and
PhET simulation (The Moving Man)
(PhET has dozens of high-quality applets, if you’ve not seen them before.)
Vector addition of velocities:
Mythbusters clip (via YouTube)
Deriving the Equations of Motion for Uniform Acceleration:
My own worked derivation (via YouTube)
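For anyone who wants the outline without the video (which may take a different route through it), the standard derivation runs along these lines. With initial velocity $u$, final velocity $v$, constant acceleration $a$, displacement $s$ and time $t$, the definition of uniform acceleration gives

$$a = \frac{v - u}{t} \quad\Rightarrow\quad v = u + at .$$

Displacement is average velocity multiplied by time:

$$s = \frac{u + v}{2}\, t .$$

Substituting $v = u + at$ into this gives

$$s = ut + \tfrac{1}{2} a t^{2} ,$$

and eliminating $t$ instead (using $t = (v - u)/a$) gives

$$v^{2} = u^{2} + 2as .$$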
Introducing projectiles:
The Mega Whoosh (beware!) (via YouTube)
Simple practice with projectiles:
PhET simulation (Projectile Motion)
Independence of vertical and horizontal motion:
Mythbusters clip (via YouTube)
The Monkey and the Hunter:
Alom Shaha’s Monkey and Hunter videos (National STEM Centre video)
(Teacher instructions version and student version)
and
Stage show clip from University of Minnesota
Also available on YouTube here.
(Both have parent pages, full of other videos, incidentally:
National STEM Centre Physics Demonstration Films and
University of Minnesota Physics Outreach Programme)
Worked examination questions:
Eclipse resources
Here are two PowerPoint presentations I’ve put together for teachers at my school to use with their classes prior to Friday’s partial eclipse (20th March). Feel free to use or modify them with your own classes if they’re any use to you.
This one is basic, aimed at tutors getting their tutor groups in the mood:
This one is aimed more at science teachers able to talk their classes through the slides (see slide notes). The underlying original images are taken from Google searches or this PPT that John Hudson has put on TES Resources. Sorry, that’s the best I can do by way of acknowledgement!
Fingers crossed for clear skies!
Excel UMS function
Over the past decade, my departmental recording spreadsheets that track student data have evolved into pretty sophisticated affairs, making full use of Excel’s functionality to make my life easier.
As a teacher, though, Excel does lack a function that I use all the time — converting a raw score into a standardised score. In the most common circumstance, this means interpolating between grade boundaries to produce either a “UMS” score or an FFT-style score.
For instance, a student scores 34/90 on a paper with grade boundaries of 31 (grade C, 96 UMS) and 43 (grade B, 112 UMS). A score of 34 is therefore a B, but how many UMS? More than 96, but less than 112. The exact figure requires “linearly interpolating” between the 96 and the 112:
Student’s unified score = 96 + [ ((34 – 31) / (43 – 31)) x (112 – 96) ] = 96 + [ 3/12 x 16 ] = 100
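In general, for a raw score $x$ that falls between boundaries $b_1$ and $b_2$, carrying UMS values $U_1$ and $U_2$ respectively:

$$\mathrm{UMS}(x) = U_1 + \frac{x - b_1}{b_2 - b_1}\,(U_2 - U_1)$$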
Excel can do this, but it is a little cumbersome to type in every time, especially when all the numbers need collecting from different places first.
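To see why, here is one native way of doing it, assuming (purely for illustration) the ascending boundaries sit in B1:E1, the matching UMS values in B2:E2, and the raw score in A5. MATCH with its default match type finds the last boundary not exceeding the score:

=INDEX(B2:E2,MATCH(A5,B1:E1)) + (A5-INDEX(B1:E1,MATCH(A5,B1:E1))) / (INDEX(B1:E1,MATCH(A5,B1:E1)+1)-INDEX(B1:E1,MATCH(A5,B1:E1))) * (INDEX(B2:E2,MATCH(A5,B1:E1)+1)-INDEX(B2:E2,MATCH(A5,B1:E1)))

(And even that breaks for a score at or above the top boundary.)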
Instead, I wrote a custom function to do it for me:
Student’s unified score = UMS(student’s score, grade boundaries, UMS values, decimal places)
All it does is the calculation above, but without the headache of having to code it every time. The only downside is that you have to save your Excel spreadsheet as a “macro-enabled” spreadsheet, with the extension .xlsm, but that’s a small thing.
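For anyone curious before downloading the file below, a minimal sketch of such a function might look like this. It is illustrative rather than the exact code in the spreadsheet, and it assumes the boundaries and UMS values are passed as equal-length ranges sorted in ascending order of raw mark:

Public Function UMS(score As Double, boundaries As Range, umsValues As Range, _
                    Optional decimalPlaces As Integer = 0) As Variant
    ' Linearly interpolate a raw score between grade boundaries to give
    ' a UMS-style standardised score (illustrative sketch only).
    Dim i As Long
    For i = boundaries.Count To 1 Step -1
        If score >= boundaries.Cells(i).Value Then
            If i = boundaries.Count Then
                ' At or above the top boundary: nothing to interpolate between
                UMS = Round(umsValues.Cells(i).Value, decimalPlaces)
            Else
                ' Interpolate between boundary i and boundary i + 1
                UMS = Round(umsValues.Cells(i).Value + _
                    (score - boundaries.Cells(i).Value) / _
                    (boundaries.Cells(i + 1).Value - boundaries.Cells(i).Value) * _
                    (umsValues.Cells(i + 1).Value - umsValues.Cells(i).Value), _
                    decimalPlaces)
            End If
            Exit Function
        End If
    Next i
    UMS = CVErr(xlErrNA)    ' score below the lowest boundary
End Function

It is then called from a cell like any built-in function, eg =UMS(A5, $B$1:$E$1, $B$2:$E$2, 0).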
Here’s an example of the function in use: UMS Function.xlsm
Feel free to copy the code into your own spreadsheets if you can find a use for it. To see the code, click “Developer” on the ribbon, and then “Visual Basic”. (If you can’t see “Developer”, add it by clicking “File” -> “Options” -> “Customize Ribbon” and selecting the “Developer” check box.)
Note: when opening macro-enabled spreadsheets, Excel disables the macros and requires users to choose to re-enable them each time. This is an annoying but prudent security feature, as it is possible to write macros that harm your computer. A sensible response is to leave the macros disabled on first opening a new file and to inspect the macro’s code (as above) to check there’s nothing malicious going on. You can then reopen the file and re-enable the macros, knowing that it is safe to do so.
Note: the grade boundaries for the above are taken from AQA’s Uniform Mark Scale Converter Tool. Don’t get me started on 34/90 qualifying students for a B grade…
Inspirational posters for the classroom wall
I made these A3 posters to get students thinking about the bigger picture. I tweeted some (low res) pictures a while ago, but have recently been asked for the original templates, which are print quality. So, here they are:
Replacing National Curriculum levels
The original system of National Curriculum levels was based upon children’s cognitive development as they grow older. The levels began with descriptors covering low-level demand and these developed broadly in line with Bloom’s taxonomy up through to levels 7 and 8. There was a general understanding within education of where on the ladder children were most likely to be at different ages.
In principle, the approach is educationally sound as a formative tool, but in practice it was also used as an accountability measure, and this caused it to develop two insurmountable operational flaws — flaws that have, 20 or more years after the system’s inception, finally killed it off.
The first operational flaw surrounds the application of the descriptors. Their appropriate use is to assess the performance of a pupil on a particular task. But given that every task has different demands (in myriad different ways), it would be wrong to assume that assessing the same child on the same day but on a different task would result in the same level. So a teacher’s data on a child will span a range of levels, and yet we insist that they report only a single level when held to account (to parents, to the school, even to the child). Levels are not percentages in tests; they cannot be averaged, and when we do collapse the range to a single “representative” value we give a poor indication of progress: we ignore the wonderful pieces of work, cancelled out as they are by the less impressive (but unrelated) pieces of work from a different day.
Which leads us to the second operational flaw: the need to demonstrate progress. It is probably broadly true to say that children, on average, make two levels of progress over a key stage. But that’s it. Beyond that, we must remember that some will progress more and some less (and often this goes with ability) and that the rate of progress is very unlikely to be linear. The need to record definite progress at closely spaced intervals (a half term, a term, even an entire year) has led to the confusion of “sublevels” and the nonsense of “two sublevels of progress per year”. This grew (reasonably?) from teachers’ desire to be able to say things like “well they occasionally did some level 5 work alongside lots of level 4 work at the start of the year, but now they consistently produce level 5 work with the occasional piece of level 6 work”, but when reduced to “they’ve moved from a 5c to a 5a”, the educational meaning is not only lost, but perverted.
I will now argue for an alternative that attempts to retain the original focus on cognitive demand (which I think is correct), but is free of the complicated smoke screen of sublevels. It is not profound, just a change of emphasis.
I would argue that we should report a child’s ability in a subject relative to age-specific criteria (rather than all-encompassing ones that the child progresses through as they grow older), and that this be done reasonably bluntly, so as not to give anyone (parents, government, even the teachers themselves) the impression that a finer yet still simplistic representation is even possible. This could work by reporting each child as either “foundational”, “secure”, “established” or “exceptional” in each subject at the point in time that the judgement is being made. This judgement would be made as described in the next paragraph but, crucially, would not be a progress measure in the way levels were: there would be no expectation for a child to progress from “foundational” to “secure” by the next reporting window, because the next reporting window would have different criteria for each category. The measure would be a constant snapshot of current performance, more akin to GCSE grades than National Curriculum levels in that regard, but resting on the same underlying cognitive basis that levels were originally intended to have, rather than on a percentage mark in a test.
So, how would a teacher “categorise” pupils into one of the four divisions? The drivers here would be what is useful. One important use would be to answer a parent’s most basic question: “how is my child doing in your subject?”. I would argue that, allowing for the necessary tolerances of not knowing what the future holds for individuals, “foundational” pupils at KS3 should most likely secure grades 1, 2 or 3 (ie D to G) in their GCSEs at the end of Year 11. “Secure” pupils should most likely go on to get grades 4 or 5 (C to lower B); “established”, grades 6 or 7 (higher B to lower A); and “exceptional”, 8 or 9 (higher A to A*). A school could use this guide to generate approximate percentages to loosely check teachers’ interpretations: for instance, perhaps 10% exceptional, 25% established, 50% secure, 15% foundational. In this way, a child and their parents build up, over time, an understanding of the child’s strengths and weaknesses in a very transparent manner — certainly more transparently than levels allowed.
That is a retrospective viewpoint, of course, and could only ever be a loose guide anyway. In reality, that guide would need to inform a school’s (and each department’s) approach to differentiating schemes of learning and individual lesson objectives and tasks so as to create appropriate challenge. For instance, in an All-Most-Some model, the lowest-demand objectives would be aimed towards supporting the “foundational” pupils, whereas the slightly more demanding objectives would support the “secure” pupils. If the objectives are written correctly, a pupil’s ability to access them would reveal which “category” they are mostly operating within. Schemes of work would thus be written to match a level of cognitive demand whose differentiation is decided in advance (Bloom, SOLO, etc), perhaps hanging together on the key assessment opportunities that will allow the teacher to make a judgement (as York Science promotes). Those judgements would formatively allow the pupil to know how to progress and also allow the teacher to anchor their judgement in something concrete.
This is not National Curriculum levels using different words. It takes the best of the levels (criteria based upon cognitive demand), but dispenses with both “expected progress” and, with it, a “fine” representation of a pupil’s ability. The fine detail will still be there — but in its entirety in the teacher’s markbook, where it can be put to good formative use. And pupils will still progress — but against criteria laid out topic-by-topic in carefully crafted schemes of learning and lesson activities, rather than against a generic set of descriptors designed to span more than a decade of learning across a dozen disparate subjects.
Teachers know how to assess and they know how to move individuals forward — and they know that it is all about details. Learning (and its assessment) needs to be matched to each individual objective and “overall” progress is a hazy notion that cannot be captured accurately or usefully in a snapshot, let alone in a single digit.
Teachers should be encouraged (and allowed) to commit to crafting superb activities (and assessment criteria) to move pupils through the cognitive demands that are inherent to the concepts in front of them at each moment in time. And when they are then asked to report a child’s achievements in a subject, they should be allowed to give “woolly” snapshots (more an indication of future GCSE performance than anything else, so that pupils and parents can tell strengths from weaknesses), with the detail being conveyed in the feedback in an exercise book, the conversations of a parents’ evening or the written targets of the annual report. How a subject department turns their internal data into a whole-school categorisation would be up to them, monitored and tweaked retrospectively by how accurate an indicator it turns out to be. But it would also be the key driver for ensuring learning objectives are pitched at the correct level of demand in every lesson, for every child, which is, I think, true to the spirit of the original National Curriculum levels.