
Add your timetable to Google Calendar

A couple of years ago, I switched from a paper diary to Google Calendar and have never looked back. This year, I went one step further by adding my timetable to my calendar as well. It would take a while to add over 1000 events to the calendar one at a time, though, so I wrote a quick Google Sheet/Script to automate it: https://drive.google.com/open?id=1I9In6vG2B0C-jVHePxzbko8hKFujh60HsIEgqVLZMKA Just edit the sheets to match your own timetable and school day timings, then hit the “Add my timetable” button. It takes no more than 10 minutes. Enjoy!

Writing an Android App…For Beginners!

A fortnight ago I decided to see if I could learn how to write an Android app…and yesterday I finished and published it (see it here and on the daydrop page, above)!  Given the steep learning curve, I thought I’d write down the most important “things to know”, in case it’s helpful to anyone else thinking of writing their own app.

Experience

I already had some experience of coding, most recently in writing a web app using HTML, PHP and MySQL.  These three are not needed at all to write an app, but the underlying principles of how to write any code are.  Your learning curve will be even steeper if you’ve never coded at all before.

Android Studio

Android apps are a bundle of code (function) and resources (appearance).  You write the code and create the resources, but the whole thing is logistically coordinated by a free “software development kit” (SDK) from Google, called Android Studio.  (You’ll need to download and install this, following the instructions here.)   You write all your code within the Android Studio application, together with the designs for all of the different screen displays of your app, and you also store all of your resources (e.g. graphics) within its folder structure.  Android Studio will help you to test your app throughout your writing and, when you’ve finished it, will collect it all together into an “Android Package” file (.apk) which is what you upload to Google Play when you publish your app.

When you first open Android Studio, it looks horrendously complicated. However, you only need to focus on a few key areas of the folder tree that it shows you:

  • the “Manifest” lists all of the things your app can do and needs;
  • the “Java” files are your code;
  • the “layout” files are what your screen looks like;
  • the “menu” files are for the Android Action Bar drop-down menus;
  • the “drawables” and “mipmap” folders hold your graphics;
  • “values” holds your styles/sizes (font sizes, colours, etc.) and your user-visible text (the words on any screen).

Note the separation of all these different aspects.  This modularity allows you to very easily change what your screen looks like on a small-screen phone compared to a 10″ tablet, or to translate your app from English into Spanish, without having to rewrite your code or publish different versions of the app.  You just provide alternative resources for different situations and Android takes care of the rest.  It’s a very good set-up.

Programming languages

Your functional code is written in a programming language called Java, which is one of the most prevalent programming languages in use in the world today.  (It has no connection at all to JavaScript, so don’t muddle them up!)  Java is “object-oriented”, which takes some getting your head around if you’ve never experienced the approach before.  I found this book quite helpful in that regard, although it only covers Java generically and doesn’t include anything about the hundreds of Android-specific “classes” and “methods” on which your code will depend.  Note that the Java coding software (the JDK, or Java Development Kit) is different to the Java running software (the JRE, or Java Runtime Environment).  You probably already have the latter installed on your computer (your web browser needs it) but you will have to install the former as part of the installation of Android Studio.
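To give a flavour of what “object-oriented” means, here is a tiny, self-contained example in plain Java (it has nothing to do with my app and nothing Android-specific about it; the class and names are made up purely for illustration):

// A class is a blueprint; an object is a particular instance built from that blueprint.
public class Pupil {

    private final String name;    // each Pupil object keeps its own data...
    private int housePoints;

    public Pupil(String name) {   // ...set up by a "constructor"...
        this.name = name;
        this.housePoints = 0;
    }

    public void award(int points) {           // ...and changed via "methods".
        housePoints += points;
    }

    public String summary() {
        return name + " has " + housePoints + " house points.";
    }

    public static void main(String[] args) {
        Pupil pupil = new Pupil("Alex");      // create an object from the class
        pupil.award(5);                       // call a method on that object
        System.out.println(pupil.summary());  // prints: Alex has 5 house points.
    }
}

Android programming is the same idea throughout, except that most of the classes you use (activities, buttons, intents and so on) are provided for you by the Android framework, and your code extends or calls them.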

Your layouts are written in XML, which is tag-based, like HTML.  I didn’t use a book for that, as it is much more intuitive.

For both, my greatest resource has been the Stack Overflow website.  Google whatever it is you’re stuck on and then read all of the answers from that site!  I haven’t yet read anything anywhere near as useful on any other website.

App basics

Every screen your app’s user sees is called an “activity”. Every activity stands alone – it has its own Java code, its own layout, its own menus (although you can reuse layouts and menus if you really want to). To switch to another activity, the Java code of the current activity sends a message (called an “intent”) to the Android system requesting that the system starts the new activity.  The fact that it is the Android system that is in ultimate control, and not your app, is one of the powers of Android: you can request to start an activity from a completely different app, or you can allow other apps to start any one of your activities.  For instance, one of my activities starts the “capture video” activity of the device’s camera app, before coming back when it is finished.
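To make that concrete, here is a rough Java sketch (the activity names, layout and request code below are invented for illustration and are not taken from my app):

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.provider.MediaStore;

public class HomeActivity extends Activity {

    private static final int REQUEST_VIDEO_CAPTURE = 1;   // arbitrary number used to identify the reply

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home);           // this activity's own XML layout
    }

    // An "explicit" intent: ask the Android system to start another activity within this app.
    private void openSettingsScreen() {
        startActivity(new Intent(this, SettingsActivity.class));
    }

    // An "implicit" intent: ask Android to start whichever app on the device can capture video,
    // then return to this activity (via onActivityResult) when that app has finished.
    private void recordVideo() {
        Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
        if (intent.resolveActivity(getPackageManager()) != null) {
            startActivityForResult(intent, REQUEST_VIDEO_CAPTURE);
        }
    }
}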

In the absence of an intent to start a new activity, the Java code of the current activity sits and “listens” for events, like button presses, etc, and you write into your code all the different things it should do when those events happen.
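For example, responding to a button press looks something like this (a fragment that would sit inside an activity’s onCreate(), after setContentView(); the R.id.play_button ID and the message are made up, and it needs android.view.View, android.widget.Button and android.widget.Toast imported):

// Find the button defined in the layout XML and attach a listener;
// the code inside onClick() runs only when the user presses the button.
Button playButton = (Button) findViewById(R.id.play_button);
playButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Toast.makeText(v.getContext(), "Button pressed!", Toast.LENGTH_SHORT).show();
    }
});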

Testing

You’ll do this constantly as you go, by pressing the “run” button inside Android Studio.  It is a lot easier to do on a real device (connected by USB cable), but Android Studio also provides dozens of emulated devices (although these do tend to run a little slowly, I have found).

Publishing

And that’s the “making”.  To publish it on Google Play, you’ll need a Google account, on which you’ll need to activate Google Wallet if you want to sell your app for money.  You then pay Google a one-off $25 (£16) fee, upload some artwork, screenshots and explanatory text, and hit the “publish” button.

Then try your best to spread the word.  🙂

Kinematics resources

Here is a collection of my favourite resources when teaching kinematics.  If you use different ones, please add them in the Comments at the bottom.

Distance- and Velocity- Time graphs:

The Universe and More Graph Game

and

PhET simulation (The Moving Man)

(PhET has dozens of high quality applets, if you’ve not seen them before.)

Vector addition of velocities:

Mythbusters clip (via YouTube)

Deriving the Equations of Motion for Uniform Acceleration:

My own worked derivation (via YouTube)

Introducing projectiles:

The Mega Whoosh (beware!) (via YouTube)

Simple practice with projectiles:

PhET simulation (Projectile Motion)

Independence of vertical and horizontal motion:

Mythbusters clip (via YouTube)

The Monkey and the Hunter:

Alom Shaha’s Monkey and Hunter videos (National STEM Centre video)
(Teacher instructions version and student version)

and

Stage show clip from University of Minnesota
Also available on YouTube here.

(Both have parent pages, full of other videos, incidentally:
National STEM Centre Physics Demonstration Films and
University of Minnesota Physics Outreach Programme)

Worked examination questions:

DrPhysicsA Worked example questions (via YouTube)

Eclipse resources

Here are two PowerPoint presentations I’ve put together for teachers at my school to use with their classes prior to Friday’s partial eclipse (20th March).  Feel free to use/modify them with your own classes if they’re any use to you.

This one is basic, aimed at tutors getting their tutor groups in the mood:

Eclipse for tutors

This one is aimed more at science teachers able to talk their classes through the slides (see slide notes).  The underlying original images are taken from Google searches or this PPT that John Hudson has put on TES Resources.  Sorry, that’s the best I can do by way of acknowledgement!

Eclipse for science teachers

Fingers crossed for clear skies!

Why it is right to remove assessed practical work from GCSE sciences

When Ofqual suggested in December that the new GCSE science qualifications should not contain any assessed practical work, I couldn’t believe it.  As a science teacher, I found it music to my ears; it seemed too good to be true.

Against this rare appearance of optimism when considering edicts about curriculum matters, I later felt anger and confusion when both the Education Secretary and national science organisations (reported here) objected to the decision.  How could they disagree?

Having reflected, I can now see that we are coming at the decision from opposite directions.  As a teacher, to me “no assessed practical work” translates to “no internal assessment”, whereas to other stakeholders it perhaps sounds a lot like “no longer any need for any practical work in science education”.  The key difference that needs to be appreciated, however, is that internal assessment never has worked and never can work, and it is vital that it is removed completely, once and for all…but this does not necessarily lead to practical-free science lessons.

In my experience, it is very rare for an officially-produced document to contain so much good sense, but when I read Ofqual’s consultation I agreed with nearly all aspects of it, the bottom line being, for me, that internal assessment is fundamentally incompatible with an accountability framework.  You can’t ask teachers to impartially conduct assessments with their students when the results affect both their own pay progression and the whole school’s standing in national league tables.  This is not to suggest that the majority of teachers break any rules, just that they are incentivised into devoting an unhealthy amount of time to playing the system to the maximum to give their students the very best chance.  This isn’t practical work, it is hoop-jumping.  It is further compounded by incredible systemic flaws, e.g. assuming equivalence between different controlled assessments; 50 raw marks being equivalent to 100 UMS (i.e. just 10 raw marks, internally assessed, are worth half an entire GCSE grade overall); poor marking guidance allowing an inordinate amount of subjectivity; controlled assessments being common across different GCSEs (e.g. Physics and Additional Science), with the grade boundaries a compromise for both…  (All of that is based on AQA’s GCSE controlled assessments, which is what I teach.)

Even pre-2006, when we assessed practical work via essays using “POAE”, it was still hoop-jumping:  you gave students tick-lists and made them keep rewriting their work until they met the criteria.  Now we run intensive off-timetable days in which we knock out the assessed components, interspersed with intensive prep lessons moments before each component.  It’s nonsense how the assessment has been allowed to drive the teaching.  “Teaching”, not “Teaching and Learning”, incidentally — there is no learning associated with controlled assessments.

To non-teachers, I would emphasise one more important thing:  it is only through delivering these internal assessments, day in day out, class after class, year after year, that you can truly understand the inappropriateness of any hypothetical system of internal assessment.  It is fundamentally opposed to both the prioritising of learning and the use of the outcomes for accountability.  The situation has always been better at A-level, because cohorts are smaller and the ability range is narrower and higher, but at GCSE it is unsalvageable.

So, I am very happy indeed to lose internal assessment.  The danger, of course, is that the systemic incentive to devote curriculum time only to that which improves grades is still there, and without practical work contributing to grades there is a very real possibility of schools abandoning it.  This is especially true when placed alongside shrinking budgets and the expense of laboratories, equipment, consumables and technicians.  But this isn’t what Ofqual has suggested either:  there is an explicit statement that conducting practical work will give students better access to 15% of the marks on the written examinations.

This is a compromise, of course, and we are yet to see either the details (what practicals will be chosen?) or the application of the Law of Unintended Consequences.  Perhaps the practicals will not be ideal?  Perhaps schools will simply demonstrate instead of allowing full-class practicals?  Perhaps exam fees will go up, squeezing budgets further?  Perhaps the exam questions will be shockingly poor (we’ve certainly seen enough of that over the years)?  Time will tell, but I cannot see how any future situation could possibly be as flawed as the current one (famous last words!) and so I am very happy to try a different approach.

So I am delighted to see Ofqual standing their ground, despite so much criticism from such established bodies.  As a practising teacher, I will ensure that practical work remains at the heart of what I do, but I and my students will no longer be shackled by the flawed assessment principles that have so negatively distorted the curriculum and the rigour of its assessment for the last 20 years.

Excel UMS function

Over the past decade, my departmental recording spreadsheets that track student data have evolved into pretty sophisticated affairs, making full use of Excel’s functionality to make my life easier.

As a teacher, though, Excel does lack a function that I use all the time — converting a raw score into a standardised score. In the most common circumstance, this means interpolating between grade boundaries to produce either a “UMS” score or an FFT-style score.

For instance, a student scores 34/90 on a paper with grade boundaries of 31 (grade C, 96 UMS) and 43 (grade B, 112 UMS). A score of 34 is therefore a B, but how many UMS? More than 96, but less than 112. The exact figure requires “linearly interpolating” between the 96 and the 112:

Student’s unified score = 96 + [ ((34 – 31) / (43 – 31)) x (112 – 96) ] = 96 + [ 3/12 x 16 ] = 100

Excel can do this, but it is a little cumbersome to type in every time, especially when all the numbers need collecting from different places first.

Instead, I wrote a custom function to do it for me:

Student’s unified score = UMS (student’s score, grade boundaries, ums values, decimal places)

All it does is the calculation above, but without the headache of having to code it every time. The only downside is that you have to save your Excel spreadsheet as a “macro-enabled” spreadsheet, with the extension .xlsm, but that’s a small thing.
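For anyone curious about the logic without opening the file, here is a sketch of the same interpolation written out in Java (the real function in the spreadsheet is VBA; this sketch ignores the decimal-places argument and simply clamps scores that fall outside the supplied boundaries):

public class UmsConverter {

    // boundaries[] and ums[] must be in ascending order and the same length,
    // e.g. boundaries = {31, 43} and ums = {96, 112} from the worked example above.
    public static double umsScore(double rawScore, double[] boundaries, double[] ums) {
        if (rawScore <= boundaries[0]) {
            return ums[0];                        // at or below the lowest boundary
        }
        for (int i = 1; i < boundaries.length; i++) {
            if (rawScore <= boundaries[i]) {
                double fraction = (rawScore - boundaries[i - 1]) / (boundaries[i] - boundaries[i - 1]);
                return ums[i - 1] + fraction * (ums[i] - ums[i - 1]);   // linear interpolation
            }
        }
        return ums[ums.length - 1];               // above the highest boundary
    }

    public static void main(String[] args) {
        // The worked example above: 34 raw marks, with boundaries 31 (96 UMS) and 43 (112 UMS).
        System.out.println(umsScore(34, new double[]{31, 43}, new double[]{96, 112}));   // prints 100.0
    }
}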

Here’s an example of the function in use: UMS Function.xlsm

Feel free to copy the code into your own spreadsheets if you can find a use for it. To see the code, click “Developer” on the ribbon, and then “Visual Basic”. (If you can’t see “Developer”, you need to add it by clicking “File” -> “Options” -> “Customize Ribbon” and selecting the “Developer” check box.)

Note: when opening macro-enabled spreadsheets, it is a feature of Excel to disable macros and require users to choose to re-enable them each time. This is an annoying but prudent security feature, as it is possible to write macros that harm your computer. A sensible response is to leave the macros disabled on first opening a new file and to look at the macro’s code (as above) to check there’s nothing malicious going on. You can then reopen the file and choose to re-enable the macros knowing that it is safe to do so.

Note: the grade boundaries above are taken from AQA’s Uniform Mark Scale Converter Tool. Don’t get me started on 34/90 qualifying students for a B grade…

Inspirational posters for the classroom wall

I made these A3 posters to get students thinking about the bigger picture. I tweeted some (low res) pictures a while ago, but have recently been asked for the original templates, which are print quality. So, here they are:

A3 portrait posters (.pptx file)

A3 landscape posters (.pptx file)


Replacing National Curriculum levels

The original system of National Curriculum levels was based upon children’s cognitive development as they grow older. The levels began with descriptors covering low-level demand and these developed broadly in line with Bloom’s taxonomy up through to levels 7 and 8.  There was a general understanding within education of where on the ladder children were most likely to be at different ages.

In principle, the approach is educationally sound as a formative tool, but in practice it was also used as an accountability measure, and this caused it to develop two insurmountable operational flaws — flaws that have, 20 or more years after the system’s inception, finally killed it off.

The first operational flaw surrounds the application of the descriptors.  Their appropriate use is to assess the performance of a pupil on a particular task.  But given that every task has different demands (in myriad different ways), it would be wrong to assume that assessing the same child on the same day but on a different task would result in the same level.  So a teacher’s data on a child will span a range of levels, and yet we insist that they report only a single level when held to account (to parents, to the school, even to the child).  Levels are not percentages in tests; they cannot be averaged, and when we do collapse the range to a single “representative” value we give a poor indication of progress: we ignore the wonderful pieces of work, cancelled out as they are by the less impressive (but unrelated) pieces of work from a different day.

Which leads us to the second operational flaw: the need to demonstrate progress.  It is probably broadly true to say that children, on average, make two levels of progress over a key stage.  But that’s it.  Beyond that, we must remember that some will progress more and some less (and often this goes with ability) and that the rate of progress is very unlikely to be linear.  The need to record definite progress at closely spaced intervals (a half term, a term, even an entire year) has led to the confusion of “sublevels” and the nonsense of “two sublevels of progress per year”.  This grew (reasonably?) from teachers’ desire to be able to say things like “well they occasionally did some level 5 work alongside lots of level 4 work at the start of the year, but now they consistently produce level 5 work with the occasional piece of level 6 work”, but when reduced to “they’ve moved from a 5c to a 5a”, the educational meaning is not only lost, but perverted.

I will now argue for an alternative that attempts to retain the original focus on cognitive demand (which I think is correct), but is free of the complicated smoke screen of sublevels.  It is not profound, just a change of emphasis.

I would argue that we should report a child’s ability in a subject relative to age-specific criteria (rather than all-encompassing ones that the child progresses through as they grow older), and that this be done reasonably bluntly so as not to give anyone (parents, government, even the teachers themselves) the impression that a finer simplistic representation is even possible.  This could work by reporting each child as either “foundational”, “secure”, “established” or “exceptional” in each subject at the point in time that the judgement is being made.  This judgement would be made as described in the next paragraph, but, crucially, would not be a progress measure in the way levels were: there would be no expectation for a child to progress from “foundational” to “secure” by the next reporting window, because the next reporting window would have different criteria for each category.  The measure would be a constant snapshot of current performance, more akin to GCSE grades than National Curriculum levels in that regard, but based upon the same underlying cognitive basis as levels originally were intended to be, rather than a percentage mark in a test.

So, how would a teacher “categorise” pupils into one of the four divisions?  The drivers here would be what is useful.  One important use would be to answer a parent’s most basic question: “how is my child doing in your subject?”.  I would argue that, allowing for the necessary tolerance of not knowing what the future holds for individuals, “foundational” pupils at KS3 should most likely secure grades 1, 2 or 3 (i.e. D to G) in their GCSEs at the end of Year 11.  “Secure” pupils should most likely go on to get grades 4 or 5 (C to lower B); “established”, grades 6 or 7 (higher B to lower A); and “exceptional”, 8 or 9 (higher A to A*).  A school could use this guide to generate approximate percentages to loosely check teachers’ interpretations: for instance, perhaps 10% exceptional, 25% established, 50% secure, 15% foundational.  In this way, a child and their parents build up, over time, an understanding of the child’s strengths and weaknesses in a very transparent manner — certainly more transparently than levels allowed.

That is a retrospective viewpoint, of course, and could only ever be a loose guide anyway.  In reality, that guide would need to inform a school’s (and each department’s) approach to differentiating schemes of learning and individual lesson objectives and tasks so as to create appropriate challenge.  For instance, in an All-Most-Some model, the lowest-demand objectives would be aimed towards supporting the “foundational” pupils, whereas the slightly more demanding objectives would support the “secure” pupils.  If the objectives are written correctly, a pupil’s ability to access them would reveal which “category” they are mostly operating within.  Schemes of work would thus be written to match a level of cognitive demand whose differentiation is decided in advance (Bloom, SOLO, etc.), perhaps hanging together on the key assessment opportunities that will allow the teacher to make a judgement (as York Science promotes).  Those judgements would formatively allow the pupil to know how to progress and also allow the teacher to anchor their judgement in something concrete.

This is not National Curriculum levels using different words.  It takes the best of the levels (criteria based upon cognitive demand), but dispenses with both “expected progress” and, with it, a “fine” representation of a pupil’s ability.  The fine detail will still be there — but in its entirety in the teacher’s markbook, where it can be put to good formative use.  And pupils will still progress — but against criteria laid out topic-by-topic in carefully crafted schemes of learning and lesson activities, rather than against a generic set of descriptors designed to span more than a decade of learning across a dozen disparate subjects.

Teachers know how to assess and they know how to move individuals forward — and they know that it is all about details.  Learning (and its assessment) needs to be matched to each individual objective and “overall” progress is a hazy notion that cannot be captured accurately or usefully in a snapshot, let alone in a single digit.

Teachers should be encouraged (and allowed) to commit to crafting superb activities (and assessment criteria) to move pupils through the cognitive demands that are inherent to the concepts in front of them at each moment in time.  And when they are then asked to report a child’s achievements in a subject, they should be allowed to give “woolly” snapshots (more an indication of future GCSE performance than anything else, so that pupils and parents can tell strengths from weaknesses), with the detail being conveyed in the feedback in an exercise book, the conversations of a parents’ evening or the written targets of the annual report.  How a subject department turns their internal data into a whole-school categorisation would be up to them, monitored and tweaked retrospectively by how accurate an indicator it turns out to be.  But it would also be the key driver for ensuring learning objectives are pitched at the correct level of demand in every lesson, for every child, which is, I think, true to the spirit of the original National Curriculum levels.

Using Twitter professionally to network with other teachers

Previous : A guide to using Twitter in teaching and Using Twitter with pupils

I use my professional Twitter account, @StuBillington, to follow and be followed by others involved in education. I get lots of ideas, information and resources from those I follow and I try to tweet things that I think they might find useful in return.

I don’t use my professional Twitter account to tweet about my personal life (e.g. what I think about the form of the football team I support). I reason that this might be relevant to my friends, but probably not to my teacher peers who are busy people and who don’t need to wade through that kind of stuff when looking through the hundreds of tweets on their feed. If I wanted to do that, I’d get another Twitter account for my personal life. (Although, actually, I probably wouldn’t, as I wouldn’t want my students stumbling across my life on show!)

I follow nearly 500 people, a number that is increasing all the time as I come across other like-minded people saying things I’m interested in listening to. Finding people to follow is relatively easy: initially, I just looked at who other people were following and, well, followed suit! If you wanted a starting point for some teachers to follow, you could take a look at some of the 500 or so I’m following. There are also “lists” of teachers on Twitter, maintained by attentive teachers; you’ll stumble across those as you look for people to follow.

Some of the people I follow tweet every day, some of them far less frequently. Either way, it adds up to a lot of tweets to read. Top tip: don’t try to read them all. Whenever I make time to look at Twitter, I just look at the most recent few dozen. In fact, timing when you tweet your own tweets is important, if you want to increase the chances of others reading them and not missing them.

Recently, I seem to spend most of my time accessing Twitter directly, through the Twitter app on my iPad. However, there are times when a more sophisticated interface is useful. My personal favourite is Hootsuite, but there are lots of other options out there. I’m also experimenting with Flipboard (an iPad app) at the moment. Perhaps something I’ll blog about in the future.

So, why would you want something more sophisticated? Well, tweets often incorporate “hashtags”: keywords that can be searched for quickly, to assemble a list of tweets on a theme, rather than the random ones on your home page that are limited to the most recent tweets of those you follow. For teachers, one of the most useful is #UKEdChat. For Science teachers, another is #ASEChat. For members of a school’s leadership team, there’s also #SLTChat. And, more recently, there’s the rapidly growing #PedagooFriday. Setting up a Twitter feed for these search terms allows you to rapidly connect with like-minded “tweeps” who perhaps you don’t currently follow. Most importantly, most of those hashtags are primarily for a weekly hour-long online “chat” session, in which lots of Twitter users all tweet on a pre-agreed topic, all ending their tweets with the hashtag, so that their comments are relayed to the other people taking part. This is a very effective and highly regarded type of informal INSET. If you’ve never tried it, you should! #ASEChat is on Monday evenings, 8pm to 9pm, and #UKEdChat is on Thursday evenings, 8pm to 9pm.

Using Twitter with pupils

Previous : A guide to using Twitter in teaching

I use my “Mr Billington” Twitter account, @FallibroomeBIL, to send my pupils things I think that they might like to know about. Sometimes this is a link to something from the world of Science that I think is interesting, sometimes it is a link to online resources that can be used to extend what we’ve done in class, sometimes it is just a message (e.g. information, a question, a joke or congratulations). Crucially, I don’t tweet anything vitally important, as I can’t guarantee that all of the relevant pupils will read it — it’s an “opt in” stream of information, for those that are interested.

While some pupils and parents are “following” @FallibroomeBIL, a lot of my pupils and their parents do not have their own Twitter accounts and so access my tweets simply by periodically visiting the webpage, www.twitter.com/FallibroomeBIL. Although some tell me when they’ve done this, I don’t really have a way to judge how many do this.

As the head of the Science Department at my school, I also maintain a departmental Twitter account, @FallibroomeSci. Again, this is for non-vital, one-way information giving. However, it is a useful single point of contact for pupils and parents, if they want one, as I use the @FallibroomeSci account to follow the individual Twitter feeds of all of the teachers in the Science Department. This is a good way to collect everything together in one place, and it advertises the existence of the teachers’ individual Twitter feeds.

By the way, it’s useful for a school to adopt an agreed format for all of its Twitter accounts (e.g. “@Fallibroome…”), as it helps pupils and parents to find them all very easily using Twitter’s search box. It’s also useful if the school’s central Twitter account follows them all, too.

I have experimented with using my Twitter account to record (and publicise to parents) the homeworks that I set my classes. This actually worked well, but it was limited to 140 characters, which was sometimes not enough. Instead, I switched to a homework blog (mrbillington.org.uk), which also allows me to upload resources, worksheets and so on. More on this in a future post, but it is worth drawing attention to here.

One final thing, with relation to maintaining a professional level of contact. A school Twitter account is for communicating information to pupils, but they may “follow” your account using their personal Twitter accounts. It is vital never to forget that it is not appropriate for a teacher to follow a pupil back, or even to look at the pupil’s feed. Just because we know where our pupils live, it doesn’t mean we should ever go round to their houses and spy on them through the front window; it is essential to extend the same social boundaries of the “in person” world into the digital world, too.

Next : Using Twitter professionally to network with other teachers
