Why Making Sense Of Mock Results Isn’t Easy

Mock Exams

This article first appeared in HWRK Magazine in November 2021. You can read the article on the HWRK Magazine website here.

“Correlation is not causation”.

I’ve been thinking of using this as a mantra. Far too often, even those of us who are aware of its truth can fall prey to this logical trap. After all, how many times have you tried out a new teaching technique, with students seemingly succeeding as a result, only to find out later on that they only really made progress because of your newfound enthusiasm, rather than the clever new strategy you used?

The key question is, how do we separate the truth from the noise? We need to ask the same question about mock exam results.

As we move into “mock season”, we should be especially mindful of the correlation/causation problem. The whole point of Autumn mocks is to find out what students do or don’t know, what they can or can’t do, and to work out what we need to do next in response to their performance.

Making Valid Inferences is the name of this game and it’s a lot harder than it seems.

Just imagine this: a student answers a “describe” question with a one-line response, when they should be writing a detailed paragraph or two. What should we make of that as their teacher?

  • Have they misunderstood the amount of detail required?
  • Do they have gaps in knowledge?
  • Is their understanding accurate, but shallow?
  • Did they merely guess the answer correctly without actually knowing it?
  • Is their response just a regurgitation from a revision sheet?

Without an accurate answer to these questions (and more besides), our next move may not have any impact. So what should we do?

Well, let’s look at the responses from the whole class:

  • Do many other students struggle similarly on that question?
  • Are other students having issues with that same exam “skill”, e.g. do they evaluate instead of describe?
  • Did they all run out of time?

Or is it actually more complex than that?

Sometimes there’s a mixed response from different students across the class. Do students from one group perform better or worse than others? A change in seating arrangements might help. But then again, it might not.

Maybe it was the weather that day. Did a wasp fly into the room during your explanation? Were your students in a bit of a rush after being late from PE? Was there a funny smell from the farmer’s field next door that students kept getting distracted by? Did you (without a hint of irony) forget to set a recall homework task on the topic where they underperformed?

In other words, did the problem occur during the teaching, rather than during the exam?

A Question Level Analysis (QLA) can be helpful, but it won’t always provide the answers that we as teachers need. Even a good QLA can only give you a limited amount of information. The information you actually need often comes from your memory of teaching that topic at the time.

What was it that helped or hindered your teaching? This might be a resource issue, a timetabling one, a staffing conundrum, or something on a whole school level, largely beyond the control of the class teacher or Head of Department.

It might even be that your own knowledge just wasn’t strong enough on that topic. That’s an uncomfortable thought, isn’t it? Well, it shouldn’t be. And we can address it without stigma, shame or professional embarrassment. In fact, I’d argue that if we are teaching a challenging curriculum, then from time to time we should fully expect it and actually embrace it in our practice, both individually and as a department. Think back to when you taught that topic: how strong was your subject-knowledge? And are you the best judge of that?

Tackling post-mock issues, then, should be a collaborative effort, not siloed off for a Head of Department or a Key Stage Coordinator to deal with alone. As a departmental team, it is worth discussing not just “how well did they answer question 8?” but also “how well did we teach the students to be able to answer question 8?” By posing the question in this way, we are much less likely to make assumptions about the student’s answer and much more likely to find the true reason for their response. We should discuss and model our own in-class explanations, how we scaffold and how we assess as we teach, checking for misconceptions and encouraging detail and nuance in students’ responses.

Having these discussions also stops us from letting ourselves off the hook. Much of a student’s attainment is down to things that occur beyond the walls of our classrooms and this is why holding teachers solely accountable for exam results is highly problematic. But we are responsible for how we teach and this impacts student responses in exams in arguably the most significant way. If we have taught it well, the students will typically perform well in assessments.

There are schools across the country whose cohorts are classed as “disadvantaged” in various ways, but which routinely outperform other schools whose students “have it easier” (at least on paper). This comes down to the teaching.

As Dylan Wiliam puts it, “Every teacher needs to improve, not because they are not good enough, but because they can be even better.”

So, scrap “correlation is not causation”. My mantra should simply be “Keep improving my teaching”. Everything else is just noise.


Giving Effective Feedback

Giving effective feedback is a balancing act

My students are about to receive their mock results. For some, this will be a time to feel relieved that their efforts so far have paid off. Others won’t be happy with their result. Ultimately though, the result itself doesn’t really matter. It’s how my students respond to their result that counts. The hope is that my students will find the balance between fear of failure and over-confidence that best prepares them for the final exams. In this post, I explain the methods I use to ensure that my students respond positively, so that they will achieve their desired result in the future. Giving effective feedback is a tricky business and the stakes are too high for us to do it badly.

Why target setting is priority number 2

As teachers, we constantly set targets, whether short or long-term, aspirational or realistic. Target setting is absolutely necessary, but it must be well-informed and fully explained. Otherwise, your students may not understand those targets immediately.

In many cases, my own students have seen their targets as too high, too low, or completely arbitrary, before those targets were explained. If I didn’t explain the targets to them, they would risk putting in insufficient effort to achieve them. The explanation, though, must contain the ‘bigger picture’; this is priority number 1. More on that in a moment.

Students’ lack of engagement with targets also seems to be caused by their own perceptions of themselves as learners. They often see themselves as an “A grade” student, for example. This makes it harder for them to come to terms with any grade that doesn’t fit that label. Following a positive result, they can become complacent, thinking it’s in the bag. A negative result can leave students thinking it can’t be done. It’s vital, then, that we spend time before giving feedback to help students understand what they should be looking to achieve, both in the short and long term. They need to know, and be constantly reminded, that ‘progress’ is not linear and that their path to success will not be a straight one.

Students need to see the bigger picture

One exam result can seem like the entire picture to some students. So, in order for targets to be meaningful to your students, they need to understand their own situation. By this, I mean that your students need to be able to see what their current level of achievement looks like, compared to their past achievements. Have they dipped? Plateaued? Accelerated? Where is it going?

They should also be made aware of what a student like them should be expected to achieve by the end of the course. I often cite examples of students from previous years who achieved similar mock results but went on to even greater success once they followed a specific plan. I then share that plan, breaking it down into practical steps which, when followed, led to my previous student achieving the desired result.

By making the steps simple, my current students are able to see further progress as realistic. This provides them with the motivation required to increase performance in preparation for the exam. Because the feedback conversation is focused on future achievement, rather than past failure, my students’ mindset is far more receptive and they tend to react more positively.

Students need to feel supported

Many students will know that a poor result is their ‘fault’, but guilt and remorse will only make them dwell on negatives. This distracts from the positives and creates a barrier to forming a solution-focused mindset. So, ensure you are giving effective feedback by using as many comments as possible about what your students have achieved. By beginning the feedback conversation in this way (and feedback must be a conversation, not just one-way) your students will be encouraged to feel as though they have a platform to build upon for future success. They will also see you as being on their side, rather than just being there to find fault.

Many successful schools use the “What Went Well / Even Better If” structure to ensure positive feedback. Here, students are left in no doubt that their successes, no matter how limited, have been recognised and rewarded on some level.

Top Tip: A good way to enhance the WWW/EBI system is to share with the whole class a range of WWW comments that you have given to the group. This then provides students with concrete, achievable examples that they can strive to emulate in future assessments.

Preparing students to receive feedback

This week I’ll be giving my students a brief questionnaire to fill out before they are able to access their results. The purpose of the questionnaire is twofold. Firstly, I aim to prime the students with as much positive-mindset thinking as possible, so that their result will be seen as just one step on the way to future success. I want to build resilient learners. Secondly, I want the students to be able to see what practical steps they can put into place, to get them from where they are to where they need to be.

Here are the questions I’ll be asking:

  1. What do you stand to gain from success in this subject?
  2. What is your end-of-course target?
  3. What practical steps did you take to move towards your mock exam target?
  4. Which of those practical steps paid off?
  5. What was your target for the mock exam?
  6. If your two targets are different, then explain why.
  7. Which practical steps would you change or not use again? Explain your reasons.
  8. How close do you think you will be to your target?
  9. If you achieve your mock target, how will you react? Why?
  10. If you don’t achieve your mock target, how will you react? Why?
  11. If you could go back in time and give advice to yourself three months ago, what advice would you give?

I may change the wording of the questions, or even add/remove some of them. However, what I want to do is to create a dialogue with each student about their own journey. The questions are really just conversation-starters.

Planning your next steps

After giving feedback on the mock exams, it’s crucial that you put a plan in place to ensure that every single student can be monitored and so that their performance on exam day is not left to chance. The plan should be specific, realistic and time-bound if it is to work. But most importantly, the onus should be on the students to solve the problem. All you are doing is giving them a blueprint to follow and dates by which you will measure their success on agreed criteria. Your role is an advisory one. You certainly shouldn’t be expected to re-teach content, especially if your students are perfectly capable of independent learning!

Steps you can put in place:

  • Students should respond to feedback as early as possible – create improved answers or redo the mock exam from scratch.
  • Set aside specific times for one-to-one conversations with each student (if logistically possible). This should happen as soon as possible.
  • Share results with colleagues in other departments and the Head of Year to see if there is an issue beyond your subject.
  • Students create an action plan for the final exams: exam dates, when they will begin revising, successful revision methods, when they will be assessed throughout the revision period to see if it’s working.
  • Book another one-to-one for six weeks’ time to see how students have got on individually. Did they stick to the plan? Where’s the evidence? Did it work? How do they know? What do they now need to focus on? Is parental involvement necessary at this point?

Finally…

Don’t judge yourself as a teacher, according to the exam results in front of you. There’s a good chance that you weren’t in control of more than half of the factors that affected your students’ performances on the day.

Besides, by now giving effective feedback, you will make a huge difference to your students.

You can be proud of that.

Follow me on Twitter and now on Pinterest too. 
