I don't know if now is the time to be going over the old arguments about the proposals for the new GCSE, especially as we are busy working out the ins and outs of the new exam, which is in some important ways different from what was originally proposed. Before Christmas, Emma Marsden and Rachel Hawkes published a paper looking back at what was proposed and how the proposals were received: "Balancing evidence-informed language policy and pragmatic considerations: Lessons from the MFL GCSE reforms in England". This link takes you to the abstract. To open the full document, use the links that say "Access to Document". It's very readable and I would encourage you to read it.
It puts the case for the proposals very clearly and tries to address what it sees as some misconceptions. It is also clear from the introduction that the new GCSE was seen as a way to reform not just assessment, but teaching. It stresses that the GCSE assessment (whether one likes it or not) heavily shapes curricula, materials, and pedagogy. So it is important that we continue to engage with the thinking and research behind these proposals.
You can see the arguments for this from both sides. But what it actually comes down to is what happens in the classroom when you try it out for real: whether it has an impact, pushes us towards changes, and actually works out as intended. Which is why I am writing this post, although I am somewhat conflicted as to whether I want to. So here goes...
The paper makes the case that the vocabulary list in the legacy GCSE was not central to the exam or to teaching. Teachers did not build their curriculum around the GCSE list, and the exam boards did not deliberately design the exams to test knowledge of all the words in rotation over the lifetime of the exam.
While this is true, it wasn't necessarily a problem. Or rather, the exam boards and teachers didn't turn it into a problem. Teachers taught the words that pupils needed to perform the tasks and topics required by the exam, without consulting the vocabulary list. In the speaking and writing exams, what mattered was pupils' ability to use their language to express themselves, rather than their knowledge of items from a list. The exception was the translation questions, where knowledge of the specific word in the text was required, and that isn't something that has changed. In the reading and listening exams, certainly for AQA if not for the other exam boards, the key to the texts and questions wasn't a mass of low frequency topic vocabulary. It was words like often, some, always, other, ago. And the focus on exact rendering of word-by-word language, rather than on answering a comprehension question, was the biggest criticism of these exams. That criticism seems likely to be exacerbated, not alleviated, by the proposed "new" approach.
So the paper is right that the vocabulary list wasn't central to teaching or to the exam. I'm not sure that this was ever a problem. But if you do think it was absurd that the list of vocabulary wasn't central, and thought it was reasonable to specify all the vocabulary to be learned, then you can see the case for reform. Does this mean that the new vocabulary list should be determined by frequency?
I can see the arguments. The words that get used all the time are worth learning because they get used all the time. Emma and Rachel base their work on research that shows that the very highest frequency words are common to whichever corpus you use. And the words are a good fit for continuing to A Level and for reading authentic resources.
I think this last point is something that we really ought to be exploring. In the 1990s and 2000s, when reading for information and pleasure and "adapt the task, not the text" were in vogue, we encouraged pupils to spot cognates and topic words from context. And maybe told them to gloss over the little words in between, or assumed that pupils would pick them up from context eventually. If instead we concentrated on the little words in between, we would find that the topic words and cognates are just as obvious as they always were, and, equipped with the high frequency words, pupils would be much more able to access authentic texts. This is something I'd love to come back to. Although in the debate about reforming the curriculum, the Ofsted Research Review warned us off authentic texts, because of its focus on texts as models of known words and grammar, rather than as material for comprehension or interest.
But in the classroom, teachers are finding it very hard to adapt to the high frequency vocabulary approach. We are not used to following a list of vocabulary and constructing a syllabus out of it or constructing texts out of it. It's not that we have swapped the old vocabulary list for the new one. It's that we have moved from basing our teaching on the tasks and topics that pupils are required to deal with, to basing our teaching on a list of words to be tested.
We were quite happy teaching the words pupils needed for a topic. We were quite happy teaching pupils the words and structures they needed for the exam tasks. And the thing is, we are still doing that. But with the extra worry of whether we are teaching the right words.
For many, it means no change. Apart from the creeping worry. For others, it means we are incentivised to try to gain an advantage by doing some of the things the new approach favours: paying greater attention to recycling words; rehearsing what pupils can do with words on a list, rather than saying what they want to say; making sure we cover words from a list in any context we can shoehorn them into; making sure we work across topics. Some of these incentives are positive and some are negative. But recycling vocabulary across topics was already a feature of strong teaching under the legacy GCSE.
It also gives rise to absurdities. If we are still teaching topics based around pupils' personal likes and dislikes, do we discourage them from learning words like skating, chess, chicken which may not be on the list? Or do we go against the whole point of the reforms and add more words which aren't on the list? There seems to be a serious mismatch between the tasks and topics and the language we are given. And I still haven't got to the bottom of the mismatch with the grammar. If we have to teach jouer à and jouer de, are there actually enough sports and instruments on the list?
And it raises interesting questions about the notion of difficulty in language learning. Some of the words on the Higher list are cognates. The difference between Higher and Foundation isn't about difficulty. It's about the number of words. It's interesting to think about this and whether we can control and segregate our pupils' exposure to words according to a list.
Even if you were to insist it makes sense for the GCSE to be built around a vocabulary list, wouldn't it make more sense to ensure the list matches the tasks, topics and grammar that pupils will be required to demonstrate? The paper argues that this question rests on a misunderstanding of the power of high frequency vocabulary. But in practice we are finding that we are very much teaching words which are not on the list, in order for pupils to perform in the topics and tasks.
One final spectre on vocabulary that this paper raises is the idea that the exam boards should be held to the stipulation that they test the full range of vocabulary on the list over the lifetime of the exam. The risk is that the exams quickly exhaust the more obvious topic words, look very different from year to year, and start to look very different from the texts in the coursebooks. When I've mentioned this before, representatives of the exam boards have been very puzzled by the idea and told me that it's not something they are contemplating. So I think we can stop worrying about that.
Another issue the paper raises is the Conversation part of the exam and the use of memorised answers.
This is the part of the debate that leaves me most bewildered. I am someone who prides himself on teaching pupils how to develop extended spontaneous answers in speaking and writing. The old Controlled Assessment GCSE, which forced us to abandon this in favour of pre-learned fancy answers, was an absolute disaster for language teaching. The legacy GCSE started to see an end to this. Some teachers were quicker than others to make the switch, and rote-learned answers had been embedded from earlier in the curriculum, so all of this needed to be undone. The breadth of topics and the requirement for interaction from the examiner, responding to the pupil's initial answer, mean that pupils' ability to use their language flexibly and across topics is winning out over the idea of pre-learning answers to every possible question on every possible topic.
The paper quotes research showing that a large proportion of pupils still do some pre-learning of answers. This isn't the same as saying that pupils learn and regurgitate a fancy answer word for word. It may well be that, faced with a high stakes exam, many pupils want to be well prepared. It may well be that one of the steps towards spontaneity and fluency is to have pre-learned answers to some questions. It may well be that the pre-learned answers are adaptable, flexible and not actually even used in the exam.
I have had pupils who thought they were doing the right thing by trying to memorise answers for the speaking exam. It doesn't go well! They are focused on word-by-word regurgitation and recall. The first thing I do is interrupt them and push them away from these answers; then they can start to talk from their repertoire and routines and make a much better job of it. The examiners' reports for the legacy GCSE comment on how pupils who can do this are much better equipped than pupils who have attempted to memorise answers. Keeping the Conversation, where the teacher can ask open questions and follow them up with further prompts, was the most important thing we managed to achieve in engaging with these new proposals.
The proposed alternative version of "spontaneous speaking", meaning responding to unexpected questions, is one of the trickier parts of the new GCSE. We know from the Role Play in the legacy exam that the unexpected question is tricky. In an exam situation, pupils pick up on a key word from the question and give an answer which may or may not fit the question. It's a short answer, often a panicky one.
The new exam has four of these questions, and the expectation is an answer with some development. We are working hard on these, and my pupils have a greater familiarity with question words than they did previously. But we shouldn't underestimate the processing load required to accurately understand the question, formulate a sentence in response, and attempt to give some extended detail. Some of the questions don't help. "Where is your school?" could induce bafflement in a pupil in an exam where both interlocutors are sitting in said school. And how to develop that answer on the spot is not clear to me at all. These questions seem designed to catch pupils out, rather than to offer a platform for them to develop spontaneous speaking. This vision of "spontaneity" comes from the point of view of testing pupils' knowledge, rather than their ability to use the language.
And I think that's the key. The research paper finds plenty of evidence to back up its own point of view. But it is firmly rooted in one vision of language learning: the exam should be a fair test of pupils' knowledge of what they have learned. Specify what should be learned, then test that knowledge and, to some extent, the application of that knowledge. A perfectly reasonable point of view. But one that isn't so simple in the classroom, where you are dealing with real pupils and real language learning. Is this the problem? Acknowledging that the GCSE (whether one likes it or not) heavily shapes curricula, materials, and pedagogy led to an attempt to deliberately shape what is learned and how it is taught. And for us in the classroom, the experience of being reshaped feels awkward, confusing and sometimes painful.