Tracking and Assessment in Music Education

When considering assessment, it is customary in academic circles to try to be precise concerning the uses and purposes of assessment. Sadly, this task is a mammoth one, and there is considerable terminology slippage, not only from research to practice, but also within each of these domains. Indeed, as Newton (2007 p.149) observed:

  1. the term ‘assessment purpose’ can be interpreted in a variety of different ways
  2. the uses to which assessment results are put are often categorized misleadingly.

…which is all a little unhelpful! I am raising this as an issue as I am aware that my blogs are read by students of music education at all levels from undergrad to doctoral, as well as by serving teachers, and I would hate to mislead by being imprecise with terminology! So in talking about tracking, which I will get on to in a minute, I am talking about a use for assessment data.

So, what is tracking, and why is it now figuring in a blog all by itself? Of all the uses to which assessment data is put, tracking seems to be yet another which causes headaches for both teachers and SLTs. Tracking is when assessment data (of any sort) is used to monitor progress (for definitions of terms, see previous blogs, especially this one) made by individuals, to ensure they are on course for a good final grade from their studies. This is all fine, and seems a logical thing to do, so why might it be problematic? I have described before (Fautley, 2012) how there have been problems associated with enforced linearity of progress, and how this has made life difficult for music teachers. Tracking pupil progress can therefore carry that same problem, as tracking is often not only about monitoring progress, but can also have a predictive element.

I have also detailed before the thinking that I have done over many years with regards to tracking progression using radar-charts, and in this blog I don’t want to repeat myself and deal with the practicalities of tracking, but instead discuss some of the conceptualisation.

Firstly, I think it is important to problematise what the functions of tracking actually are. These seem to me to be:

  • Monitoring attainment
  • Monitoring attainment across a range of topics
  • Monitoring attainment in a single topic
  • Monitoring progress
  • Monitoring progression (i.e. the speed at which progress is being made)
  • Predicting attainment
  • Auditing learning
  • Auditing teaching
  • Inter-cohort comparison
  • Intra-cohort comparison
  • Monitoring teaching
  • Monitoring teachers

This is quite a complex list, and as with many other areas, I want to invoke Boud’s (2000) notion of assessments having to do “double duty”, in that a single thing is asked to perform multiple functions. In this list many duties are being asked of tracking. But which are important to the classroom music teacher?

I think one of the first questions that should be asked of and by the teacher is “what uses will you/I be making of tracking data?”. The obvious answer, “to track progress”, is actually a little more complex than it seems at first glance! To investigate this we need to dig a little into what is going on. To keep things simple (this is a blog after all!) I want to label and describe only two types of tracking (sorry!). These are:

  • General tracking
  • Construct-specific tracking

What I call “general tracking” is what the National Curriculum levels ended up being used for, especially with the use of sub-levels (yes, those things that don’t officially exist, and were made up by teachers more-or-less separately in every school in the country!). What takes place in general tracking is that a regular summative assessment grading, on a common scale, is employed, and pupils’ progression along it is monitored. Represented pictorially, this looks something like this:

Figure 1: General Tracking


What I am referring to as ‘construct-specific’ tracking, on the other hand, is where a singular component of musical endeavour is monitored, graded, and reported upon. It is this that gives rise to the radar charts I mentioned in an earlier blog, which are reproduced here for simplicity:

Figure 2: Radar Charts

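For readers who think in data rather than pictures, the distinction between the two types can be sketched in a few lines of code. This is a minimal illustration only: the pupil names, scores, and the four constructs are invented for the example, not a prescribed set.

```python
# General tracking: one summative grade per pupil per assessment point,
# all on a single common scale (think NC-style levels with sub-levels
# rendered as numbers). Pupils and scores here are invented examples.
general = {
    "Sam": [4.2, 4.4, 4.7],   # autumn, spring, summer gradings
    "Ash": [5.0, 5.0, 5.1],
}

# Construct-specific tracking: a separate score per musical construct,
# which is exactly the shape of data a radar chart is drawn from.
constructs = ["performing", "composing", "listening", "notation"]
construct_specific = {
    "Sam": {"performing": 6, "composing": 4, "listening": 5, "notation": 3},
    "Ash": {"performing": 5, "composing": 5, "listening": 6, "notation": 5},
}

def overall_progress(scores):
    """General tracking: change between first and last grading on the common scale."""
    return round(scores[-1] - scores[0], 2)

def profile(pupil):
    """Construct-specific tracking: per-construct scores in a fixed order,
    ready to lay out on the axes of a radar chart."""
    return [construct_specific[pupil][c] for c in constructs]

print(overall_progress(general["Sam"]))  # 0.5 — one number, no detail
print(profile("Sam"))                    # [6, 4, 5, 3] — a profile, not a point
```

Note what the general-tracking number hides: Sam’s single 0.5 says nothing about the gap between performing (6) and notation (3) that the construct-specific profile makes visible.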

Now, what I am worrying about in this regard is the issue of intended learning objectives and how they translate into assessment data which are then used for tracking. As the influential American assessment expert W James Popham has noted:

…the meaningfulness of criterion-referenced interpretations is directly linked to the clarity with which the assessed curricular aim is delineated. Clearly described curricular aims can yield crisp, understandable criterion-referenced interpretations. Ambiguously defined curricular aims are certain to yield fuzzy criterion-referenced interpretations of little utility. (Popham, 2011 p.47)

Popham refers to ‘curricular aims’ here, whereas I would prefer to say “intended learning statements”, but the outworking is the same. And it is here that I worry about how effective tracking can be, because what I think is really important is that classroom music teachers think about the question:

  • what do I want the pupils to learn?

before we get to the question:

  • how do I want to assess it?

So, this all links back to curriculum planning, and being clear in our own minds with regards to what is being learned, because without that we will struggle with construct-specific tracking, especially if we wish to avoid Popham’s “…fuzzy criterion-referenced interpretations of little utility”!

To put this visually, what I am worrying about here is when I see a KS3 (say) curriculum where the intended learning hasn’t been thought about at a macro level, only at the micro level. Again, represented pictorially, this would mean that the intended learning is only delineated in terms of the yellow arrows in figure 3:

Figure 3: Curriculum planning and intended learning (macro and micro)

Yet again we are taken back to the point that it is thinking about curriculum which is a really important aspect of assessment planning.

I appreciate that this blog has been exceedingly shallow, and has danced across the surface of some very deep issues without getting its feet wet! But I hope that there is enough detail in here to help teachers who are thinking about these matters to explore them at their own level, and that it might promote some thinking about uses and purposes of assessment in music education.


Boud, D. (2000) Sustainable Assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22, 2, 151-67.

Fautley, M. (2012) ‘Assessment issues within National Curriculum music in the lower secondary school in England’. In Brophy, T. S. & Lehmann-Wermser, A. (Eds), Proceedings of The Third International Symposium on Assessment in Music Education, Chicago, IL, GIA Publications.

Newton, P. E. (2007) Clarifying the purposes of educational assessment. Assessment in Education, 14, 2, 149-70.

Popham, W. J. (2011) Classroom assessment : what teachers need to know, 6th Edition. Boston MA, Pearson/Allyn and Bacon.
