What makes great music teaching?

I’ve been reading the Sutton Trust (hereafter ST) report which came out earlier this week, entitled “What makes great teaching? Review of the underpinning research”. I am generally supportive of the document, but I have a few niggles about how it applies to music education. It may, of course, simply be that I haven’t allowed enough thinking time to elapse, or that I have missed some nuances, but anyway, here are some thoughts. (Unreferenced page numbers in this blog are all from the report.)

I think that I worry about defining great teaching as that which has the most effect on pupil test scores:

“it makes sense to judge the effectiveness of teaching from its impact on assessed learning. If the assessments and value-added models available to us are not good enough, we need to improve them. In the meantime we must exercise some caution in interpreting any claims about teaching effectiveness.” (p. 9)

The second part of this quotation, for me, cancels out the utility of the first. I don’t think assessments in music education at the moment are good enough. Simply saying “improve them” implies an unreformed positivist view of assessment, and assessment in the arts is problematic: constructs are not easy to define uniquely, and assessing them will always involve an element of valorisation and subjectivity. We also have the issue that, as Dylan Wiliam noted: “We start out with the aim of making the important measurable and end up making only the measurable important” (Wiliam, 2001, p. 58). Now I know that the ST report talks about a range of measures, and about triangulation being vital, but the nagging music educator in me thinks that dangers lurk here.

Now, what I am not saying is that assessment is irrelevant to music education; as regular readers of this blog will know, I have been working for years to try to develop improved approaches to assessment in music education. But I don’t think there is a simple linear trajectory along the lines of:

Teaching > Learning > measured by test scores

But to be fair, the ST report does say:

“In technical terms, we value consequential validity over criterion-related validity. This perspective also allows us to acknowledge that quality teaching is multidimensional: a profile of multiple, independent strengths and weaknesses may be more useful – and a better fit to reality – than a single, unidimensional measure.” (p. 11)

And this I am more than happy to concur with. I have long been a fan of consequential validity, but would at the same time like to point out that we are still having trouble with criterion-related validity in music education! And I’ve been “saving up”, so to speak, for a blog post on construct validity in music ed for a while – that’s still on the back burner.

Which takes me back to the ST report. That teaching should be linked to outcomes is beyond doubt, for me, but at what stage should those outcomes be investigated?

“A key feature of the current review is that we try to limit our attention to well-defined, operationalisable behaviours, skills or knowledge that have been found to be related, with at least some justification for a causal relationship, to measureable, enhanced student outcomes.” (p. 11)

Which takes us back to the points I made earlier, but also on p. 11 we have this:

“There must be some evidence linking the approach with enhanced student outcomes. There is not necessarily any assumption that such outcomes should be limited to academic attainment: whatever is valued in education should count.” (p. 11)

Which I can agree with, but I sometimes feel that this report, the work of Coe et al (and the et al is important here), reveals some inner tensions between a relentless positivist spin on measurement and a nagging doubt that ‘something else’ might be going on too. Now I may be wrong on this, and I am pleased that such things are there, but I worry that they may be missed by those audiences seeking confirmation of their own linear positivist instruction = test scores ideas.

There is more on this theme later:

“We have already made clear that our definition of effective teaching is that which leads to enhanced student outcomes. An important corollary is that our criterion measure, against which we should validate all other sources of evidence about effectiveness (such as from lesson observation, student ratings, etc.) must always be anchored in direct evidence of valued learning outcomes.

We need to stress that this does not mean that we have to privilege current testing regimes and value-added models. Existing measures and models may fall well short of what we need here. However, success needs to be defined not in terms of teacher mastery of new strategies or the demonstration of preferred behaviours, but in terms of the impact that changed practice has on valued outcomes. Because teachers work in such varied contexts, there can be no guarantee that any specific approach to teaching will have the desired outcomes for students.” (pp. 38-9)

And again, I feel the strong hand of authorship-by-committee here – and a good thing too! But as I said before, I wholeheartedly agree with “Existing measures and models may fall well short…”. I feel that in music education, especially at KS3, they do.

Where I do want to get off the bus, however, is with the quotation from that linchpin of nouveau-Victorian educational thinking, Daniel Willingham: “Memory is the residue of thought” (p. 24). This has long troubled me with regard to music ed. Those who know me personally will know that I am recovering from a finger injury (I’ll spare you the details) in which I had to have the tip of my LH index finger sewn back on. I am currently unable to play any of the instruments on which I claim some proficiency, as I have yet to regain the feeling in the finger, other than ‘ow, it hurts’! Now I know that I remember perfectly well how to play all these instruments, but when the time comes that I feel able to, I know that memory will not be enough. String players will understand – I bet my intonation will be awful! This, applied to music performance generally, is part of my worry with the Willingham quote. There is more to musical performance than memory alone; there is instrumental technique too. I know that this rests in memory, to some extent, but I think there is more to it than that (Cobb (1999) is interesting on this too).

On a general point, I do feel that at times the underlying assumptions of the authors creep through. For example, having said that “there are a number of reviews available with quite different claims about what characteristics of teacher practice are associated with improved outcomes” (p. 11), which is fair enough, they then go on to say: “Enthusiasm for ‘discovery learning’ is not supported by research evidence, which broadly favours direct instruction” (p. 23). Now I’m not going to defend discovery learning too much – that’s not the point. The issue is that they drop in ‘direct instruction’, another top-of-the-pops must-have for the nouveau-Victorians, without seemingly subjecting it to the same evidentiary requirements they have for things they don’t like. I’m not going to diss DI too much, as I am well aware that I’ll upset the storm-troopers of the twittersphere if I write even a tiny bit pejoratively about one of their shibboleths, but for me in music education, instruction (a word which always makes me bristle) has to allow for pupils to have a go at doing musically, which requires technique development over time. Teaching can help with this, as it is reciprocal, and allows the teacher to help the learner improve. But I do not want a return to lessons about music, which are inherently unmusical. I understand that lessons about, say, geography may often have to be just that, i.e. about geography, but I think music lessons should be musical.

Where I do fully concur with the ST is here:

“…the need for a high level of assessment and data skills among school leaders. The ability to identify and source ‘high-quality’ assessments, to integrate multiple sources of information, applying appropriate weight and caution to each, and to interpret the various measures validly, is a non-trivial demand.” (p. 46)

YES! And SLTs should stop treating music as congruent with STEM subjects for assessment purposes. It isn’t. As Private Eye says, see my blogs passim!

And so I broadly welcome the Sutton Trust report, but I worry that in its focus on what it knows about – i.e. not music – it may prove yet another stick with which to beat the music department. We need to be clear on what we can and what we cannot make better given our current assessment tools, and blaming music teachers for only having a hammer won’t help!

Refs:

Cobb, P. (1999) ‘Where is the Mind?’. In Murphy, P. (Ed.), Learners, Learning and Assessment, pp. 135-50. London: Paul Chapman.

Wiliam, D. (2001) ‘What is wrong with our educational assessment and what can be done about it?’ Education Review, 15(1), 57-62.


2 Responses to What makes great music teaching?

  1. Phil Taylor (@philtaylor7) says:

    As you say, Martin, there is much to be welcomed in this report, but I was also left with some nagging concerns. Perhaps I need to read it again and reconsider, but these were things that occurred to me:
    – Despite some initial caveats about context and while stressing the importance of subject knowledge, I don’t think the report acknowledges that different subjects, as you point out for music, may have their own pedagogies. Of course, there are some general features of great teaching but context really does matter including, as well as subjects and curricula, school climate, quality of relationships, and, yes, learner attributes (age, prior learning, interests, responses, not ‘styles’ (heaven forfend!)) etc. The universal frameworks for instruction offered appear out of kilter with the earlier caution regarding generalisation, which leads me to the next point…
    – Much of the research cited in support of great teaching is drawn from the school/teacher effectiveness tradition. This tends to privilege the narrow outcome measures of attainment that we are encouraged to question, challenge and look beyond earlier in the report. School/teacher effectiveness measurement based on standardised assessments and value-added modelling is a zero-sum endeavour. If one school/teacher demonstrates improvement it will be at another’s expense, constrained by the normal distribution or snake-plot of grades/rankings. I see the encouragement to seek wider learning outcomes and appropriate ways to assess them as a key strength of the report, but this does not seem to follow through into the preferred evidence base. This leads to the next point…
    – The report seems to favour research of the ‘what-works/what works best’ kind, in pursuit of ‘high quality’, despite again offering caveats regarding evidence and interpretation. So Husbands and Pearce (2012) are taken to task in their support of pupil voice as an effective aspect of pedagogy for failing to provide ‘robust evidence’ that meets (undefined) ‘basic quality standards’ but they clearly think they have provided just that (see https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/329746/what-makes-great-pedagogy-nine-claims-from-research.pdf, p.3). So like so much educational research this seems to boil down to paradigm arguments (‘my research is better than yours’) but leaves the practitioner none the wiser. This links to the next point…
    – Research findings from cognitive psychology are also cited in support of approaches such as direct instruction, as you have pointed out. These are often based on highly specific experiments on such things as ‘the nature of learning’ and ‘role of memory’ but presented in generalised terms as though they apply to all learners, at any age, in any subject. In other words, context is again written out of the evidence. Strategies such as direct instruction hold many meanings (at least five according to Rosenshine, 2008, http://www.centerii.org/search/Resources%5CFiveDirectInstruct.pdf) and this fascinating and entertaining variety can be easily seen on YouTube! Finally…
    – The report says (p.43): ‘How teaching leads to learning is undoubtedly very complex. It may be that teaching will always be more of an art than a science, and that attempts to reduce it to a set of component parts will always fail. If that is the case then it is simply a free-for-all: no advice about how to teach can claim a basis in evidence.’ I fully agree with the first sentence but don’t accept the last. Great teachers gather evidence continually and use it to make context-sensitive professional judgements in the interests of their learners; far from a free-for-all. Sometimes this will be informed by evidence from wider research, which will always need re-contextualising. Great teachers are experts in their own classroom, because they know, understand, live and breathe their subject, their context and their learners. They do their best to secure the best possible measured outcomes for learners within the constraints of a competitive system that rations grades.

  2. Pingback: In search of good music education | jfin107
