A very long time ago, in the early 1970s, I competed at national level, even dancing in the finals at the iconic Blackpool Tower ballroom, so I can share some insights from my ‘lived experience’ of dance competitions. There happened to be a ballroom dancing school in the village I grew up in, famous for its woollen mill and for being cut off by deep snowfall in winter. The dance teachers frequently appeared on Come Dancing, the forerunner of Strictly Come Dancing. (There’s a link at the end to find out more about Come Dancing.) The Black Dyke Mill band practised in the schoolroom over the road from the house I grew up in, and the Brighouse and Rastrick Band in rooms at my secondary school. In competitions we danced to ‘big band’ music. I have brass band music as well as ballroom dancing ‘in my bones’. As a result, I know a thing or two about posture, footwork, and musicality, enough to be able to join in the marking when Strictly is on TV. Maybe this is where my interest in assessment and feedback came from!
Some thoughts about Assessment and Feedback in Strictly Come Dancing.
What are the celebrity dancers being judged on? What is being assessed?
The fundamentals – posture, footwork, and musicality or ‘performance’ – come up a lot. BUT there are no explicit criteria (unlike the BU Generic Assessment Criteria) and no clear pass or fail thresholds. This has always seemed mysterious to me – but hey, it’s only a TV programme!
Is it possible to compare very different dances?
Some dances are notoriously harder than others – the rumba and the samba, for example – yet some contestants excel at them. Are some assignments perceived to be harder than others? The implicit basic criteria about posture, steps and musicality apply to all dances. There are no stated Intended Learning Outcomes in the show, but we could propose some, such as: demonstrate the fundamental principles of the dance; perform a routine without mistakes; interpret the style of the dance effectively to music. You can probably think of others.
What is the quality of feedback?
One of the judges tends to give simple statements as feedback – “Fab-u-lous” or “I loved it” or “Awful, darling” – without constructive comments on the reasoning behind the judgement, or on how or what to improve. The chair of the judges frequently provides a detailed critique of the aspects that impressed and the aspects to improve, particularly the technical aspects of posture and footwork. This can be applied to later dances, though each dance has its distinct techniques and style. Another judge provides enthusiastic encouragement but little technical critique, and another often refers to his own recent experience as a pro-dancer in the programme. Which judge are you? This is far from the balanced statements contained in the BU Generic Assessment Criteria and elaborated by the marker. Formative feedback is provided throughout the week by the professional dance partners as they teach and rehearse the dance. The couples often say they have taken the judges’ comments seriously and integrated the feedback into their training.
The public vote
Viewers are invited to participate in the decision by voting online or by phone. The judges’ ranking on the scoreboard and the public ranking are combined and are of equal value. It’s not easy to predict who’ll be in the dance off when the judges’ scores and public vote are combined. Is this like having two elements of assessment, each worth 50%, one involving peer feedback? The couples must impress both the judges and the public. Only the resulting changes to the scoreboard are revealed, so this aspect is not transparent. The criteria used by members of the public are a complete mystery. Is it skill? Is it comedy value? Is it attractiveness? Are they voting for a popular professional dancer rather than the celebrity partner? Is it a disadvantage to be paired with a new professional partner who hasn’t developed a popular following?
Are the pro-dancers (who choreograph and teach the routines) being judged too?
Sometimes we hear “the choreography let you down” or “not enough content”. Pro partners will often apologise and offer to ‘take the blame’, but it does not change the marks. This might be similar to situations where the assignment has not been designed to meet the Intended Learning Outcomes (ILOs), or the assignment content failed to meet all the ILOs. This links with the literature on alignment.
Does the marking get ‘harder’ as the series progresses?
Much as the expectation of performance differs from Levels 4 to 5 to 6 to 7, the competitors are expected to improve from week to week. “This is week 7, we are expecting more of you than in week 4.” You could replace ‘week’ with ‘Level’.
In conclusion, how does Strictly align with Sambell’s assessment for learning?
Quite well on most of the dimensions.
- The intensive training to learn the dances is certainly rich in informal feedback.
- There is a mix of summative and formative assessment.
- The task of learning a dance is authentic and complex.
- The celebrity dancers learn to self-correct and to evaluate their own progress by watching the professional dancers perform group routines, watching the other competing couples, watching recordings of their own performance in the competition, and seeing themselves in the dance studio mirrors during training.
- In training, feedback takes the form of a dialogue between the professional and celebrity dancer.
- But the summative formal feedback (and mark) from the judges is variable, and the public vote lacks transparency.
The image is of some of my ballroom dancing dresses discovered in my parents’ attic.
Dedicated to Kip Jones. His arts-based and narrative approaches to education and research will always stay with me. The projects we worked on together are some of the highlights of my time at BU.
Anne Quinney, Principal Lecturer, FLIE.