What is “Black Box Thinking”?
Black Box Thinking is a philosophy in which learning emerges from mistakes. The phrase was coined by Matthew Syed in his excellent book of the same title, where he examines performance and critical self-evaluation in sport, aviation, politics and many other fields. He took the term from the “black box” flight recorders fitted to aircraft, which capture vast amounts of data that can be used to inform future improvements, especially after human error, the failure of systems and procedures, unexpected events and even complete disasters.
How does black box thinking apply in education?
In education, just as in aviation, we continually train ourselves and others to help ensure consistently high performance. Yet despite the time invested in this training, students still sometimes underperform in exams. Schools and inspection bodies collect this data, which contains a wealth of meaningful information to guide current and future performance. But I’m not certain that we use this information effectively. After all, which information should we act on? How should we act on it? And what culture should we create around the data we collect?
When teaching doesn’t work…
A few years ago, Steve, a friend of mine working in another school, called me on A Level Results Day. He was in shock. For the last few years, his students had achieved excellent exam results and he was considered by many to be an outstanding teacher. This year, however, a number of his students had “failed”. By “failed”, he meant that they had passed, but had dipped significantly below their expected grades. In the post-results analysis he had to present to the Headteacher in September, Steve was required to account for this dip, despite having submitted much higher predictions only two months earlier. How could he have got it so wrong?
In essence, he had assumed that because he had always been right about his students in the past, he could draw similar conclusions about his current students. Only this time, he was looking at the wrong data, or at least interpreting it in the wrong way. Steve’s current students were not in any way “weaker” than in previous years. Nor had his teaching changed much. But he HAD missed one crucial point. The STUDENTS were different. He had forgotten to take this into account and had subconsciously inferred that the data which had predicted previous cohorts’ results so effectively was just as relevant for this year’s students. Steve was wrong.
When the “data” doesn’t add up…
We are all familiar with the use of assessment results to inform our understanding of how students progress towards their targets. However, those results do not “measure progress”. They are a proxy: something which may indicate progress but which is not synonymous with it. Steve believed his assessment procedures to be rigorous. He used a range of questions from the exam board’s past papers. He was himself a seasoned examiner, so felt competent to judge the quality of student responses. But he was ignoring something crucial. Steve had focused entirely on improving the techniques his students used to answer exam questions. In the end, it didn’t matter.
Steve recalled some of the papers from the exam board to see what had gone wrong. He assumed that the students had ignored the techniques he had taught them. How could they have forgotten the special mnemonics they had constructed together? Had they not written using PEE paragraphs? Had they not followed up each of their ideas with a brief evaluation? Had their conclusions not followed the highly prescribed formula he had repeated time and time again?
The papers showed Steve what had really happened. The students didn’t know the content. As much as they had tried to structure their writing, they just didn’t know the answers. Steve was expecting a deep evaluation of quotes and had taught the students how to go about it, focusing on multiple interpretations of keywords and phrases. But the students hadn’t memorised the quotes. The hypothetical case studies presented in the exam were supposed to trigger students to consider socio-economic theories, court cases and historical events. But the students hadn’t understood the ones they were supposed to write about on exam day. They only understood the ones they were tested on in class.
Despite the failings of some, a small number of students had performed well. But instead of patting himself on the back, Steve just felt bewildered. How had they performed well when others hadn’t? After all, they were all in the same class, had access to the same resources and were assessed in the same way throughout the course. Their results should all have been roughly on a par.
Or so Steve thought.
Why do some students fail?
Students succeed and fail due to a multitude of factors. They may lack knowledge and understanding of a theory, method or event. They might not have memorised the information they need. Weak skills of analysis and evaluation may undermine the quality of their responses. The structure of their answers could make it difficult to demonstrate their mastery of the question. Steve considered all of these possibilities but was still at a loss to explain the underperformance. The truth was that these weren’t the only factors at play.
Let’s look at why three particular students failed:
Student A had recently been dealing with the bereavement of a close family member. This had taken its toll on the student, who had performed well up to that point. In the final run-up to the exam, Steve had believed that this student would cope well with study leave, having demonstrated for almost two years that he could work well independently. In this instance, however, he was wrong. The student was unable to focus at home in the way he could at school, in part because he was constantly surrounded by reminders of the passing of his relative. Whilst his bereavement would not have been much easier to bear at school, at least he might have found some space to concentrate a little better, or for longer periods, and performed closer to his potential on exam day.
Student B had a poor track record regarding her attendance. Despite this, she still managed to perform well in her assessed essays. As it turned out, she was close friends with a student who had written the same essays in the previous year. She re-worded these essays and in some cases had even memorised them by rote for closed-book timed assessments in class. By doing so, she evaded the attention of staff who were actively looking for students requiring intervention. Since her grades were good, they didn’t consider her to be at risk of failing. Her problem, though, was that in the exam she was not able to adapt those memorised answers when the question changed ever so slightly. She pulled the wool over many eyes, including Steve’s, and failed outright.
Student C was a high performer. At GCSE she had achieved all A* and A grades, and had done so with little visible effort. Throughout A Level, however, she had not always enjoyed the same level of success. Essay grades ranged from A* to C. Steve had been hot on the case with this student and had accurately identified where marks were being gained and lost. He gave thoroughly detailed feedback to the student, who was able to redraft the essays to an excellent standard, following the advice he gave. But on the day of the exam, her marks were inconsistent across the paper. Why had she performed so well in some areas, but so badly in others? As it turned out, the detailed feedback had made no difference. Why? The student hadn’t had to think hard enough for herself about how to improve. In the end, her highest marks came from the topics where Steve’s feedback was much more limited in detail (despite the formative essays being of an equally low quality to others where feedback was detailed). In this instance, the student had performed badly overall because she hadn’t become independent enough. She was still overly reliant on the teacher to help her improve, even in the final weeks and days before the exam.
Action points for “Black Box” teachers
- Assess regularly. Balance scheduled tests with unscheduled ones to accurately identify true levels of understanding.
- Use rigorous assessment methods (past-paper questions, etc.).
- Give feedback that strikes a balance between too much and too little detail.
- Create and maintain a ‘culture’ of student independence.
- Reward resilience and genuine effort, rather than high attainment alone.
- Test knowledge and understanding in creative ways, to avoid “scripted” responses.
- Formalise how you will act on the data you collect. Checklists are a time-efficient way of developing set procedures. (More on this in a future post!)
I would love to hear about your use of “Black Box” thinking in education.
Just leave a comment on this post or send me a tweet (@guruteaching) and I’ll get right back to you.
And don’t forget to LIKE and SHARE!