Consistency and Coherence

How many types of differentiation do you employ in your teaching? Lots, I’ll guess. How about in your musical ensembles? Err… How often do you ask the drummer to play the clarinet part? Seldom, I’ll guess. Yet that is what seems to be happening in the weird and wacky world of school approaches to assessment! Let me explain…

The phrase teaching music musically (the title of a 1999 book by Keith Swanwick) is very well known, and often crops up in conversation. I’d like to think that the phrase assessing music musically is also frequently uttered, but I think I may be kidding myself on this one! But assessing music musically is vital if assessment in music is to have any meaning. I have written, spoken, and blogged about the need to know who any assessment is for when designing it: pupils, teachers, or the system? And I know from what is said to me when I meet teachers in schools or at conferences that this resonates with them.

But it seems that some (many?) senior leadership teams (SLTs) just don’t “get it”. So, one of the phrases that I hear a lot at the moment runs along the lines of:

“My SLT won’t let me do that, as they say Ofsted are looking for consistency.”

This normally puts an end to assessing music musically, as consistency is mistaken for uniformity. It means that assessing maths mathematically, geography geographically, and English Englishly cannot take place, as that would not be consistent (i.e. uniform) with what the SLT have decided the school is doing.

I think this is wrong, and is like asking the drummer to play the clarinet part in the example I gave above. A one-size-fits-all policy can end up actually fitting no-one. So what is going on? The same SLT would come down on a teacher like a ton of the proverbial bricks if said teacher observed that the reason for their lower sets not doing too well was that “I don’t do differentiation, I assume they are all top sets, and teach them identically”. And quite rightly so! Yet the same thinking doesn’t extend to differentiation for the teaching staff. So, can it be consistent for maths (four lessons a week, say) to have to report assessment levels (or whatever has replaced them) at the same half-termly rate as music (one lesson a fortnight)? No: this is not differentiating, and it is not, in the true sense of the word, consistent. Neither is it subject-centred, or even learning-centred.

Which brings me to the word I think we are in danger of losing: coherent. I think it is possible for music to be assessed musically, maths mathematically, and so on, and for coherence to arise from this. This could be simply in the way, say, that effort grades are given, or in the timing of when reports are sent to parents. But it can also mean that subjects are allowed to assess in ways which suit learning and doing in that subject. I think that coherence matters. Of course I would like there to be internal consistency in the musical assessments, but that does not mean they have to be identical to the ones in maths. Coherence arises from recognising this, and it is an area that I feel SLTs need to understand.

It is, for example, incoherent to say that musical learning can only be assessed through written work. It is also inconsistent. At the moment, schools are gearing up for the musicfest that is Christmas. I’m sure SLTs won’t be too impressed by music teachers who say, “we’re not singing carols this year, but year 7 have written some good essays on the origins of English Yule-tide musical genres”. No, we expect carols to be sung, not written about by kids. Likewise, I bet the PE staff don’t select the football team on their ability to calculate the parabolic trajectory of spherical objects to 2 significant figures!

I think ‘consistency’ has become one of these Ofsted myths. I made a typo in a presentation recently, and put “Oftsed”. I think some of these myths are oft-said so frequently that they take on what Bruner might have called ‘folk status’, and people assume they’re true simply because they have heard them so often. Oft-said does not equal Ofsted!

But I know that, as a result of my saying this, teachers will contact me with examples of things they are forced to do which, when examined closely, are neither consistent nor coherent. I can only offer suggestions, and I feel the time has come to stand up, be counted, and say “coherence matters more than consistency”, and that subject-specificity should not be lost in the steamroller effect of whole-school systems inherently unsuitable for some subjects, in our case music.

And that’s an argument I shall be making both consistently and, I hope, coherently in the coming weeks!


4 Responses to Consistency and Coherence

  1. Terry Loane says:

    Thank you, Martin, for another thoughtful and thought-provoking post. I agree that we need to say loudly and clearly that coherence in how we teach/assess music is far more important than consistency, uniformity and standardisation. As Pasi Sahlberg said, “The worst enemy of creativity is standardisation.”

    But we need to ask ourselves where the enthusiasm for “the steamroller … of whole-school systems inherently unsuitable for some subjects” comes from. The answer is quite simple: fear of Ofsted. Senior managers are, I believe, unlikely to be won over by arguments about coherence if the approach we are advocating might be perceived to pose even a remote risk of negative perceptions by inspectors. The fact that senior managers’ fears may be based on superstitious and false Ofsted myths is irrelevant – they are paralysed by fear (as Bernadette Hunter pointed out in her speech at the NAHT conference last year).

    My first experience of the steamroller of whole-school systems inherently unsuitable for some subjects was back in the 1980s, when I was Head of Music at a comprehensive school in the East Midlands. The school introduced a so-called ‘assessment system’ whereby each teacher had to give a grade to each pupil in their subject once or twice a term. I simply refused to adopt the system within the music department, pointing out both that it was unsuitable for music and, more generally, that it was statistically invalid regardless of the subject. The system did not last long in the school, partly because of my boycott. But of course my maverick approach simply would not be tolerated in a 21st-century school. I may have won a battle back in 1984, but we appear to be losing the war now :-(

    For the answer perhaps we should go to Pasi Sahlberg, whom I quoted in the first paragraph. As many of your readers will know, he was the last chief inspector of schools in Finland before school inspection was abolished in that country about 20 years ago. His book ‘Finnish Lessons’ is a remarkable challenge to the standardised, data-driven, compliance-led approach to education that is, I believe, at the root of the problem you have identified.

  2. aspain2014 says:

    Thank you Martin. This entirely resonates with discussions between teachers taking place within Teach Through Music (www.teachthroughmusic.org.uk), as you know. Having attended one of the recent DfE consultation events on GCSE and A Level (Music) reform, I was struck by how reliability of assessment has become such a huge concern at national policy level that it is riding roughshod over validity of assessment. Cost, feasibility and public perceptions of assessment ‘rigour’ – which seems to boil down to stamping out cheating or subjectivity – are dominant concerns. A scary situation, given how important GCSE outcomes are in the way schools are judged, which could – and I fear will – blow arguments for more musical modes of teaching and learning at KS3 out of the water.

    • drfautley says:

      Yes, I worry a lot about reliability and validity. I think we have a long way to go. If I mention “construct validity” I am usually met with blank faces. Validity and reliability are always trade-offs in assessment, yet until we can really get to grips with what is going on, we are going to have music at exam level at 16+ and 18+ which is not fit for purpose. And whether any of this will percolate through to KS3, goodness only knows!

  3. terryloane says:

    I believe you focus on an important issue, aspain2014, when you identify that the wish to ‘stamp out’ subjectivity leads to an emphasis on reliability at the expense of validity in how we report students’ musical achievement to third parties. Because so much can be at stake (both for the student and the school) depending on the difference between, say, a grade A and a grade B, one can understand students, schools and parents demanding to know the objective criteria upon which the marks – and hence the grades – are based. It’s little wonder that those in the ‘national policy’ industry obsess about objectivity/reliability.

    But we have to have the courage to say that the things that really matter in music (and indeed in many other areas of learning) cannot be assessed objectively. Can we really say objectively whether Oscar Peterson is a better pianist than Mitsuko Uchida? Can we place Beethoven’s nine symphonies in rank order? Can Beethoven’s Eroica symphony possibly be assessed on any objective criteria that existed before it was composed? Of course not. And what’s more, any attempts to find objective criteria and so-called reliability by which musical achievement can be judged will inevitably result in trivialisation, in false reductionism. It is this that surely makes it so difficult for school music teachers to teach/assess music musically. It is this that makes it so difficult for “teachers and pupils [to behave] as musicians, being empowered through creative ownership” (to quote from http://www.sound-connections.org.uk/teach-through-music).

    You mention the concept of “construct validity”, Martin, but we have to have the courage to say that any worthwhile construct related to music-making against which we might test validity is a complex phenomenon (using the word ‘complex’ in its correct scientific meaning) – and complexity science has demonstrated for at least 70 years that complex phenomena cannot be understood through simplistic numerical measurement. It really is a pity that those who believe that they can assess worthwhile human activities in objective, ‘reliable’ numerical terms seem to know so little about complexity science and the limitations of reductionism when describing living systems.

    I say that music educators will only be free to teach and assess music musically when they ditch the false rigour of the reductionists and accept that important musical judgements can never be expressed in objective numerical terms. Because the constructs are complex, ‘construct validity’ requires that musical achievement be assessed through inter-subjective narrative description/discussion – and this is a long way from GCSE and A-level grades.
