Saturday, 14 February 2026

Reflecting with Pupils on Beliefs about Language Learning. Is there a right answer?

In a recent post, I shared some work my Year 9s wrote in the writing assessment that followed the first half term's work.



I enjoyed the confidence and individuality with which they expressed themselves in French, taking risks and saying things they wanted to say.

Then on the INSET day in January, we were lucky enough to have a day with David Didau, doing whole school training on teaching and learning. He gave a very strong message that we should not be allowing (let alone encouraging) pupils to create work that contained mistakes. If we did this, we were going too fast too soon. We should be spending our time at sentence level until pupils were perfect. If we allowed pupils to work beyond the level of carefully practised accuracy, we would be compounding error and poor performance. And he was talking about pupils writing in English, let alone in a new language!

This made me think of examples like this.



Was I right to encourage the pupil to be proud of what they could do with their French? You can see there are errors in spelling, gender, verb endings and vocabulary. Was my focus on creativity, coherence in developing ideas, and personal expression the wrong focus? Did the very demands of creativity and thinking up what to say distract pupils from the important task of forming accurate sentences? Did allowing pupils to express themselves inevitably invite them to try to say things that were bound to be wrong? What should I do?

I decided that our next unit (Jobs and Future Plans) did lend itself to less free-flowing imaginative writing, and could be an opportunity to accommodate more of a focus on accuracy. I made this an explicit discussion point with the class, so they could reflect on the balance of expression versus accuracy in their work.

After an initial piece of work in their booklets, I shared the work of four pupils on the board.

"Evelyn" had written something that was highly accurate, by concentrating on putting together a paragraph built out of the French she was learning. 



Not one word of it was true. It was simply an exercise (on this occasion) of using the language for the sake of practising the language she had learned. David Didau would have loved it. And it also fits with the fashionable idea of language learning as carefully constructed vocabulary and grammar exercises, where showing you know what something "means" is more important than having something meaningful to communicate. We appreciated Evelyn's accuracy and her approach to the work, making something tasty out of the ingredients she had.

Then we looked at a different approach.



"Henry" does not start with his ingredients and see what he can make out of them. Henry starts with things he wants to say. His personal dreams, his current obsessions, and things he thinks are important. He tries to say them using French he knows, but saying what he wants to say is more important to him than building perfect sentences.

Then we have "Dylan". Here is Dylan's previous piece of writing from the Going to the Beach unit.



And here is what he wrote about Jobs:



You can see his total disregard for accuracy. His focus is entirely on saying things he wants to say for his own amusement. Usually about crabs.

Now Kirsty. Kirsty also sets out to write what she wants to say. She often goes beyond the French we have learned in this and in other topics. She often makes mistakes because she is pushing at the boundaries of her knowledge.



As a class we discussed these four examples (and others). We discussed them from the point of view of doing well in an exam. And from the point of view of being a language-learner. I think that at this point we arrived at some sort of consensus that the key was self-awareness and deliberate decisions around taking risks. Which is borne out in some of the comments you can already see on the pupils' work.

We decided to tweak the parameters of the task. Each pupil would write a paragraph in test conditions, using only French they knew. It wouldn't be about them, personally. It would be about "Being a teacher" and it specified that they should use the opinions, reasons, conjunctions and future expressions we have been working on. And it should aim to be a "Perfect Paragraph."

Let's see what we got...

Here's Kirsty.



It's a more boring version of what she was trying to write. It still goes beyond what others in the class know and can do. And yes, it is more accurate than her work usually is.

Here's Ruby's Perfect Paragraph alongside her Going to the Beach paragraph so you can appreciate the difference.




And here's Dylan's Perfect Paragraph. Disciplined and sticking strictly to showing he can accurately use the French we've learned in this unit.



Yes. I know. But it is noticeably more accurate!

When it got to the writing assessment, what should I do? Should I tell them it is being marked for accuracy, an exercise in showing they had memorised the language for this unit? Or is it being marked for showing me they can express themselves and develop an idea coherently?

I left it up to them. I told them I already had the evidence of what their work looked like so far with both focusses. And now I wanted to see how what we had discussed came through in their own individual work and in their awareness as language learners.

Who do you want to see first? Here's Jess who wrote the Crossy Roady piece at the top of this post:



Here's Henry.



Here's Ruby:



And I suppose you want to see Dylan. Here's Dylan.



Have they got more accurate? Has there been a retreat from wild and reckless indulgence? Have they become more restrained and boring? Is their way of working individual and innate, or does it vary depending on the task? Is there a best balance between accuracy and expression? And does it all depend if we are talking about testing knowledge or if we are talking about language learning?

And we will look at Kirsty's. But I'm leaving that to the end because I think I have made my mind up about the apparent exam success/language learning dichotomy. First, here's something I noticed on a pupil's feedback section on his listening and reading test. In the "What was the best lesson" box on the right, I had lots of interesting answers. The speed dating lesson, the novel we've dipped into, watching Ma Vie de Courgette at the end of term, playing the Red Herring detective mystery on computers... But this pupil picked out the "Perfect" paragraph with clearly defined parameters. Now I need to go back and ask them why. Was it the clarity, reducing the cognitive load of having to think what to say? Was it the successful focus on accuracy? Is it a permanent thing we should do? Or was it a one-off timely intervention for that exact stage of their learning?



And here's Kirsty.



You will have seen that even in the "Perfect Paragraphs", perfection cannot be guaranteed. There is no way that a Year 9 pupil is going to be able to predict that in French you can't use "je veux" and "être riche" to say "I want my friend to be rich." There will always be ways in which French doesn't work in the same way as English. We can see here that a focus on merrily saying things the pupil wants to say does lead to some distraction from accuracy in things she is supposed to know. But I am sure that, from a language-learning perspective, Kirsty's work is excellent.

I don't want to say that there's anything wrong with the approach pupils like Evelyn or Ruby can adopt when they need to, writing carefully accurate pieces that put paragraphs together from French they know, finding ways to use it to develop an idea coherently.

So I don't think there's any one right answer about language learning. But I do think there is a wrong one...

Of the pupils whose work we have looked at (I have others!), who is the pupil who is hoovering up most French? The pupil who has a grasp of all the French we have ever learned and can turn it inside out to say things she wants to say. The pupil who at the merest sniff of something new, can squirrel it away and add it to her repertoire. The pupil who found she couldn't say "would be" so has gone out of her way independently to find out how to say it. And then use it. The pupil who is prepared to make mistakes because she knows that's what happens when you push at the expanding boundaries of what you know and can do. The drive to express yourself is clearly central to language learning. And I cannot go along with the idea that we should stamp it out until pupils have flawlessly mastered the basics.

Sunday, 1 February 2026

The Language Lab Podcast

 I was a guest on episode 4 of  https://www.languagelabpodcast.co.uk with Will Marks. We examined in detail where we are with teaching languages in English schools, how we got here, and where we might aim to get to.

If you don't have Spotify you can listen on other sites like https://podcastaddict.com/podcast/the-language-lab-podcast/6007214

And Will made a nice AI image of us having a cup of malted milk in Hong Kong:


Do let me know what you think!

Sunday, 18 January 2026

Guest Post: A turning point...

In a major Camarón de la Isla / Bob Dylan / Judas Iscariot / Keith Jarrett adopts electric amplified music turn of events, I interviewed Gemini AI on a turning point in Language Teaching in English schools. Here is its summary of our conversation.


 The Bauckham Report: Why MFL is Finally Ready to Turn the Page

For nearly a decade, the 2016 Bauckham Report (the Modern Foreign Languages Pedagogy Review) has acted as the "north star" for language education in England. Led by Ian Bauckham and heavily championed by the Department for Education, it promised to rescue a "fragile" subject by returning to basics: Phonics, Vocabulary, and Grammar.

However, as we enter 2026, the cracks in this "knowledge-rich" foundation have become impossible to ignore. From its failure to understand the trauma of the "Controlled Assessment" years to its silence on the grading crisis, the report is increasingly seen not as a cure, but as a misdiagnosis.

Here is why it is time for the MFL community to move on.

1. Fighting a War That Had Already Ended

The report’s most significant flaw was its historical timing. It spent much of its energy attacking the "rote-learning" and "memorized chunks" that defined the Controlled Assessment (CA) era (2009–2016).

What the report failed to acknowledge was that teachers didn't choose rote learning because they hated grammar; they chose it because the assessment system punished anything else. By the time the report’s recommendations were being implemented, CAs had already been abolished. Teachers were already pivoting toward spontaneity for the new linear exams. The Bauckham report was essentially providing a map for a territory that had already changed.

2. The "Three Pillars" vs. The Human Element

The report introduced the "Three Pillars" model:

Phonics: Sound-symbol correspondence.

Vocabulary: High-frequency word lists.

Grammar: Systematic structural progression.

While these are essential building blocks, the report treated them as the entire building. By narrowing the focus to these mechanics, the curriculum often stripped away culture, identity, and real communication. Students might know the 1,700 most common words, but if they can’t use them to talk about their own lives or understand a different culture, the "why" of language learning disappears.

3. The Silence on Severe Grading

Perhaps the report’s greatest "blind spot" was grading severity. For years, data has shown that it is significantly harder to get a top grade in a language than in History or Geography.

By focusing purely on pedagogy (how we teach), the Bauckham report implicitly blamed teachers for the decline in student numbers. It ignored the fact that many students drop languages not because the lessons are "bad," but because they are "strategic." They know a language GCSE is a higher risk to their grade average. It wasn't until 2024 and 2025 that we saw adjustments from Ofqual to address this—an intervention the 2016 report should have championed from the start.

4. A "Prescriptive" Future?

The 2024 GCSE reforms (being examined for the first time this year, 2026) are the ultimate legacy of the Bauckham report. They feature:

Strict Vocabulary Lists: 1,200 to 1,700 "word families."

No Unfamiliar Material: A permanent removal of "abstract" or "unseen" language.

Dictation and Reading Aloud: A heavy focus on phonics.

While these changes aim for "accessibility," critics argue they make the subject "dry" and "mechanical." Instead of opening doors to a global world, the curriculum has become a closed loop of pre-defined words.



Conclusion: Time for a New Narrative

The 2016 Bauckham report was a product of its time—an attempt to bring "rigor" to a subject in crisis. But "rigor" without relevance is a recipe for further decline.

To save MFL, we must move beyond the "Three Pillars" and embrace a model that values the multilingualism of our students, the cultural depth of our subjects, and—above all—the simple joy of being understood in another language.



You can read more of my original interview with Gemini here, including its views on how an obscure, flawed report came to have such political influence and, in Gemini's words, became "weaponised" against the profession.

Thursday, 1 January 2026

Let's enjoy and celebrate!

 This post is to start the year with some absolutely delightful examples of pupils' work. Have a look and see if it brightens your day the same way it did mine!

Year 9 French Written Assessment October

Year 9 French Written Assessment October


These are done in test conditions without special warning or preparation. You can see from the tickbox criteria at the bottom that the pupils understand that writing spontaneously from French they know will score at a different level than pre-planning and learning. In fact, I shouldn't have used the words "score" or "level", because the statements are descriptive and informative rather than linked to ranking or judgements.

You can see that the pupils have also commented on their work, starting with specifying that they challenged themselves to write this using their own repertoire of French. They may also have volunteered a comment on the quality of the work, or further information on their experience of the process of creating it. 

Here are some more.

Year 9 Writing Assessment October


Perhaps the most important thing for me here is that this writing assessment is not an exercise in demonstrating that they can correctly use certain items of language. In all cases, the pupils are driven by wanting to say things. True things, fun things, silly things, imaginary things, sad things, vindictive things, and sometimes run out of things and not really know what to put things. Their comments at the end show that sometimes they know that things weren't quite "right" or that they took risks. They are doing this in the confidence that both they and the reader understand they are on a journey with language learning, where their ability to express themselves is central and being developed.

There are aspects of this work that I can follow up in another post: How does this written work correspond to their ability to speak with increasing fluency? What feedback should I give on the work? What does it show about chunking of language versus manipulation of atomised language? I have plenty to say on all of these, both to them and also on here. But for now, let's start the year by just enjoying and celebrating this!

Year 9 Written Assessment October


Monday, 22 December 2025

What do we mean by "meaning". And does it matter?

 In language teaching, we seem to be struggling with two different meanings of the word "meaning".

On the one hand we have "I know that tortue means tortoise". Where pupils are tested on their knowledge of the meaning of words. It's an approach that believes in regularly testing pupils' ability to parse sentences containing known words and known grammar, to cement memorisation and conceptualisation. The language is selected (by frequency of vocabulary) and sequenced (to exemplify step-by-step grammar concepts), so that the pupil's knowledge is built and reinforced.

On the other hand, we have the idea that language should be for pupils to express themselves and understand each other, to create and take part in communicating "meaning". This reminds me of the Spanish and French expressions for "to mean" - querer decir / vouloir dire - to want to say something.

Does it matter in language learning that our pupils learn to use their language to say things they want to say? Does it matter that what they read or hear has something to say, rather than just to practise and test their knowledge of language features?

We see this in GCSE and A Level listening and reading papers. What masquerades as a comprehension question often turns out to be asking pupils to show they can parse certain language features, even when they are not relevant to the purported question. At GCSE, we have seen questions like these. Things like "He didn't get on with his teachers" (which accurately characterises the understanding of the relationship) being marked as wrong. Because it didn't accurately parse the word "badly". Or at A Level, this question about what someone did one day. Answering the question (she went to see the castellers, she took a photo, she posted it online) is not rewarded. Because it doesn't show knowledge of the grammatical features "it was appetising to her" and "she decided to...". And these are not random rogue questions. This is a feature of how the examiners see meaning as simple demonstration of knowledge of the "meaning" of words and grammar. Not the meaning of what someone said.

It is also in Ofsted's guidance on curriculum design for languages. They insist that language be introduced in a strict sequence, based on exemplifying concepts, not based on teaching pupils to say things. The example they give is to teach pupils to talk about red dogs and red tortoises, and to avoid teaching green dogs and green tortoises until a future step. Because the adjective rouge is invariable for gender. Whereas vert would require knowledge of adjectival agreement if applied to a tortoise. All of which ignores the fact that if you are going to teach pets, pupils will want to talk about a range of pets in authentic colours. At this stage, pupils are making links from the language to the real world, rather than links and patterns internal to the language. This kind of real meaning is important to learners. And you wouldn't want to lose that!

But is it important to learning? Maybe that kind of meaning, with lots of pupils all trying to say random different things they aren't ready for yet, leads to them being given a collection of one-off things to say, that don't stay in long term memory and don't add up to coherent conceptualisation of the grammar of the language.

Here's an example that happened with my Spanish class last year. So they were the last class to take the old style GCSE. Back in Year 10, we had done some work on Shopping. Improvising answers in speaking, then writing them up. Most pupils did something that rehearsed the repertoire of opinions, reasons, tenses with a bit of conflict, conversation and disappointment thrown in.

But one pupil came up with this:



You can see it does contain the elements of the repertoire we practise using across topics. It has opinions, with reasons, direct speech, an element of conflict of opinions, use of imperfect and preterite ending in disappointment. You can see the underlying structures of the model aquarium story, both in terms of the repertoire of language and in how they are deployed.

But this pupil's answer was different to many others in the class. Because they were telling a true story. And a painfully personal one, with genuine and lasting disappointment.

Does this make any difference in terms of language learning? I am tempted to throw my hands up in the air and raise my eyebrows. Because of course we want to be equipping pupils to say things they actually want to say. Not teaching it as some kind of sudoku where they concoct answers to successfully fit all the required specified pieces into a pattern to show they can solve the puzzle. But even just in terms of long term memory and internalisation of language, does this pupil's work show something important?

Well. Here's what happened in Year 11. As we prepared for the exam, I did not let them look at their work from Year 10. They should not only have internalised the language, but also still be able to deploy it.

The pupils who had cobbled together an answer to show they could use the language features were basically starting from scratch. They had no memory of the work that they had done in Year 10.

Whereas the pupil who had based their story on a true story that mattered to them personally wrote this.



They were able to quickly and fluently reproduce several of their answers from Year 10 because what they had written was memorable. But if you look closely, it's not at all a case of word-by-word memorisation. It's actually a different version of the same story. And I am pretty sure if I listen back to their recorded GCSE speaking exam, they came up with another spontaneous version on the day of the exam.

Where does this leave the current GCSE speaking exam? AQA's marking of the Conversation by counting conjugated verbs seems to have gone entirely towards the parsing of language features, rather than the creation of meaning. As we see in this post, candidates who want to say things in genuine response to the examiner's question do worse than pupils who trot out a list of three essentially meaningless verbs.

This exchange from my mock speaking exam would score in the bottom band:

Have you ever had a problem abroad?

Yes. Once in a shop in Spain my mum forgot how to say 'coat'  in Spanish.

So...?

So we went to a different shop to buy a coat.

The pupil made the unforgivable mistakes of giving a genuine answer to the question, interacting with the examiner, and putting too much information into one clause.

Had they just said, I went to Spain. I went to a restaurant. I don't like to eat in a restaurant, then they would have been in the top band.

All my instincts and experience, such as the examples of the pupils in this post, are telling me that this is bad for language learning.




Saturday, 13 December 2025

AQA Conversation really not working

 At our latest ALL in the East meeting (scroll down for previous meetings on this ALL page), we discussed the implications of the AQA markscheme for the Conversation, in the light of conducting mock exams. The main area of concern was the way the need for information to be delivered in 3 clause chunks, each with a conjugated verb, did not make for a natural conversation with pupils interacting spontaneously with the teacher-examiner.

In the run-up to the mock, I had been confident that despite the confusion around AQA's marking of the conversation, it wouldn't in practice have too much effect. I imagined the stronger candidates would still do well. And the weaker candidates would still do less well.

I was wrong. Stronger candidates tended to put more information into a single clause. Stronger candidates tended to interact more with the examiner. Stronger candidates tended to give an answer that focused on saying something they wanted to say. All of these 3 things penalised them.

Here's an example.

Pupil: I always like to try to eat healthy food like fruit and vegetables.

Teacher: For example?

Pupil: For example yesterday I ate salad for lunch and a potato for tea.

Each of these answers is a single clause, putting this pupil in the bottom band. Their responses are both minimal. 

Had they said simply, I like to eat fruit and I love vegetables. I don't like salad, they would have scored in the top band, for an "extended" answer using three conjugated verbs.

One thing that is recommended in the AQA spec is for the teacher-examiner to use short follow-up prompts, as I did in the example above, to elicit more information from the candidate. Things like, and...? so...? for example...? Why? And what if it rains...?

I have certainly used this in the previous GCSE, to interrupt and redirect a pupil who had a pre-learned answer to deliver, directing them away from stilted word by word regurgitation, and steering them into a more spontaneous interactive conversation. We also use it in teaching, for example with the conjunctions dice game described in this post, or in working with pupils explicitly on how an answer can develop logically and coherently.

After our discussions in the meeting, I contacted AQA to see if using these interjections to invite the pupil to develop their answer, would allow the two utterances in this example to count as developing the answer. So that my prompting, as recommended in the guidance on conduct of the exam, was allowing the pupil to show that they could continue their idea and spontaneously give further detail in interaction with the teacher-examiner. That's what I asked. And that is what I was hoping they might say.

Or, on the other hand, could it be AQA's decision that interacting like this actually penalises the pupil, because it means that what follows the "For example...?" doesn't count as developing the answer. It counts as a separate minimal answer.

That was indeed AQA's response. In this example from my mocks that I put to them, they determined that this is two minimal answers. This pupil would be in the bottom band.

This means several things.

Firstly, it means that I am less likely to use these follow-up prompts. Because a pupil who has already given information could give some more. But they may not have the required three items if they already told me one or two things in their initial answer. Saying "For example...?" may be trapping them into giving one further detail. So I will be pushed towards falling back on my list of starter questions, making the exam more of a predictable plod through a list of questions. Even though this is explicitly prohibited and undesirable.

Secondly, it means that I have to teach pupils to give answers in chunks of three conjugated verbs. This risks moving towards pre-prepared and over-rehearsed answers in order to achieve this. Again, undesirable.

Thirdly, it means that in conducting the exam, if a pupil only gives a single clause answer, or a two clause answer, I will have to sit and wait for them to add a third clause. Pupils will have to be trained to just say something. Without worrying if it is a logical development, or something they really want to say. As in the example above, just adding, I don't like salad would push you into the top band.

Fourth, and possibly worst of all, I have to train my best pupils to be more like the weakest. I have to train them NOT to talk naturally and put lots of information into a clause. I always like to try to eat healthy food like fruit or vegetables has to be replaced by minimal chunks of information each with a verb. I like cheese. I love cake. I don't like salad. 

Sorry. I just realised I accidentally and ironically used the word "minimal" to refer to what would be a top band "extended" answer. This markscheme is topsy-turvy. In this universe, the clause that contained the most information was the minimal one.

I must emphasise that, going into the mocks, I told myself that the marking wouldn't be too bad. But the experience did not, unfortunately, live up to that. There were numerous examples of strong candidates putting lots of genuine information into one clause. Here's an exchange with another pupil:

Have you ever had a problem abroad?

Err. Once, on holiday, in a shop in Spain, my mum forgot how to say "coat" in Spanish.

So...?

Oh. So we had to go to another shop to buy a coat.

This doesn't count as a pupil developing their answer in interaction with the teacher-examiner. This counts as two bottom band minimal answers.

I had to explain this to the pupil, using this example from their exam, to show them where they were losing marks. Their response, But that's not how a conversation works. 

Well. It is now.



Some limited good news:


I have heard whispers that pupils who extend spontaneously beyond 3 clauses might not be wasting their breath. I am waiting to hear back from AQA on this.

Have now heard back. The email from AQA was a bit sniffy about answering the actual question, because having said 3 clauses is "extended", they can't really say 4 or 5 clauses is even better. So they just say that it also counts as extended. And say that it may also help with variety of language and so score more marks that way.

This is the problem. Firstly, their exemplification of extended as 3 clauses is only at the micro level of single answers. It doesn't show what is required over the whole conversation. And secondly, they refuse to understand that when they set the goalposts, teachers have to aim for them.




Sunday, 23 November 2025

Is there Salvation for the GCSE Conversation?

 In a previous post, we looked at how this definition of "good development" is a joke.

AQA exemplification of "amount of information" in the new GCSE spec


Choosing the example "I don't like social media because it is boring" as the definition of "good development" is knowingly taking exactly the sort of answer we don't accept from pupils and holding it up as desirable at GCSE. The post recognises it is doing the important job of signalling that memorising long fancy answers is not required. But that job should have been done by the parameters of the task, not by the markscheme.

So we've ended up with a markscheme that defines as "good development" something which patently is not an example of good development.

The exam board have rescinded the 17 question guidance for marking. Although I have yet to see any information about its promulgation or withdrawal on any AQA site. But they cannot rescind the markscheme. Because it's in the specification.

Let me give you one example of what this would mean. Out of these two answers, which would score higher?

¿Tienes un restaurante favorito donde te gusta ir para celebrar una ocasión especial?

a. Me gusta Ed's diner. Es grande y es divertido.

b. Pues, en el pasado, siempre me gustaba ir a festejar el cumpleaños de mi hermana menor en un restaurante pequeño cerca de mi casa.

The answer is, of course, a.

Answer a. has three clauses and is not just well developed. It is an extended answer. Answer b., with just one clause and pieces of information added on, is a minimal response.

Or try this one:

¿Te gustan los animales?

a. Sí. Tengo un gato. Mi gato es grande. Mi gato es negro.

b. Sí, y algún día espero ser veterinario en mi propio consultorio.

Answer a. again is an extended answer. Two bands above answer b. Which is only a minimal answer.

And of course, in both cases, the pupil attempting answer b. would be more likely to fall into error. So we should strongly advise pupils against this kind of answer which does not score well for development and could also lose marks for accuracy.

The exam board, when they withdrew the 17 question guidance, were shocked that schools would "game the exam" by training pupils to give 17 accurate 3 clause answers. This is the problem. We have an exam board setting the goalposts. Then bemused that schools aim for them.

We have already seen this in the Photo Card where the marking guidance actively disincentivises good teaching. And in the questions following the Read Aloud task, where the best tactic is to say 3 random things linked to the topic of the question.

The problem is that there is no credit for true development. For coherence. For the three clauses to be linked. Or convincing or personal.

It has come down to saying three accurate clauses. Only got 2 clauses? Train your pupils to throw in a third formulaic add-on:

Sí, y algún día espero ser veterinario en mi propio consultorio. Si puedo.

Sí, y algún día espero ser veterinario en mi propio consultorio. Creo yo.

Sí, y algún día espero ser veterinario en mi propio consultorio. Me gusta la idea.

Sí, y algún día espero ser veterinario en mi propio consultorio. ¿Por qué no?

That would lift those minimal answers to "good development". Although still not good enough to be "extended".

It's a markscheme that rewards inanity.

It's a markscheme that is inane.


Is there anything we can do?

Yes. There is. But it's going to take a bit of sophistry and exegesis. Because the exemplification is written into the specification and will have to be interpreted in the same way jurists look at the intention of the framers of a sacred text like the American Constitution.

This would mean making a nice distinction with the exemplification in the specification: it is there to exemplify. NOT to define. So although an inane 2 clause answer would qualify as "well developed", a coherent answer should qualify as "better developed" than an inane response. Over a 5 minute conversation, a candidate who can be personal and coherent, should see their responses rewarded over the candidate whose responses come in 2 clause bundles, but where the information is basic and the purported links are inane.

And a pupil whose responses over a 5 minute conversation take answers and extend them logically, coherently, with convincing detail and examples (even if it means making a few more mistakes) could be rewarded more highly than responses made up of random assertions bundled into a 3 clause answer but with no real link, lacking anything that would actually be worthy of the name "development".

The exam board could stop counting clauses as the Church stopped counting angels dancing on the head of a pin. They could allow the examiners to take into account whether the information in the answer was coherent, personal, linked, thoughtful, interesting. An answer that was going somewhere rather than an answer that is going nowhere. Because although the overall criterion is "amount of information", it is broken down into the idea of "good development". And coherence is surely a factor to be considered when looking at "good development".

The exam board have made their point with the exemplification that long fancy memorised answers aren't wanted. The exemplification has dealt with what is NOT wanted. It's done its job. But it mustn't define what pupils must do in the exam. The exam board clearly confessed this when they bemoaned teachers "gaming" the guidance. So they need to not be pinned down by it.

The exemplification applies to the micro level of utterances. What they hadn't thought through was what their goalposts mean for a 5 minute conversation. Clearly, "because it is boring" is not going to see a pupil through 5 minutes. This was the purpose of the apocryphal 17 Questions. They were an attempt to extend the micro exemplification to the whole conversation. An interpretation which, as it was not in the seminal text, could be withdrawn after a few days in limbo. Especially as it tried to do away with the timings, which, although "recommended", were in the framers' original text.

So let's pin our theses to AQA's door. The conversation should be around 5 and a half minutes, as stated in the specification. The conversation should not be rote memorised fancy answers, as exemplified in the specification, nor conducted via a list of pre-ordained questions. The criteria for marking are for the amount of information, including how well this is developed and extended. The specification exemplars take us only so far with this, exemplifying how fancy long answers are not wanted. But they don't show us what a 5 minute conversation of developed or extended answers should look like. A conversation with give and take with the examiner, genuine questions about the pupil and their ideas. Some of these will be developed in more detail than others, and a good candidate will have genuine and convincing development, with a level of coherence and exploration. The intention of the framers was not to reward counting inane 3 clause clusters of meaningless language, deliberately kept simplistic but accurate.

And the whole catechism of "I have a cat. It is big. It is black." brings the GCSE into unsustainable disrepute when Year 9 are attempting things like this (below). We cannot have a GCSE that rewards inane 3 clause bundles over genuine development and whose cardinal role is to hold back pupils' expression and bind them to arcane formulaic responses. Don't know where this religious metaphor came from. But it does feel as if we've gone back a few hundred years to something on the verge of collapse! Read this. It will cheer you up:




Friday, 24 October 2025

Marking the Conversation at GCSE (AQA) -- Not funny

There are two meanings to calling something "a joke". One is something deliberately risible, to make those in the know laugh. The other is that something is an object of derision. Sometimes both meanings coincide. As in, "this definition of Good Development is a joke."



This is from the new specification for GCSE in French, German and Spanish from AQA. And anyone who knows anything about teaching languages will spot the joke. "I like to eat carrots because it is interesting" is the kind of desperate answer we don't accept from pupils. Our teaching consists of trying to move them on from this kind of answer. There are teachers who ban "it is interesting" because it's seen as the last resort of pupils who can think of nothing to say and have turned up to an exam utterly unprepared.

Yet here we have "I don't like social media because it's boring" as the very definition of "Good Development."

What is it trying to signal? Two clauses is what is considered a well developed answer. And three clauses counts as an extended response. It is signalling that long pre-learned answers are not required in order to perform well. Its message is all about what is NOT wanted, rather than thinking through what might be required.

In exactly the same way, it is signalling that deliberately fancy expressions thrown in to wow the examiner are not wanted. In terms of amount of information, "it is boring" is no worse than a pre-learned "autant que je sache". Unfortunately it also means "it is boring" is just as good as a thoughtful "because it takes up too much of my time".

We are looking at very complex arguments being played out about the nature of the level of difficulty in languages. There are lots more ironic jokes at play here. Like the fact that "autant que je sache" was a favourite of teachers most strongly associated with the "Knowledge Curriculum". Supposedly to show how well their pupils could do if taught "properly". When all they were doing was showing that the supposed hierarchy of difficulty is bogus. Just as using "je vais" used to be a whole National Curriculum level higher than "je dois" because "je vais" is the future.

Revenons à nos moutons:



This really should have had no place in the Specification. It too clearly smacks of in-jokes and point scoring in the spat between exam boards and the GCSE panel in the creation of the new exam. It is too focused on what is NOT wanted (rote answers and fancy expressions), rather than thinking through what IS wanted. As such, it could have been guidance on the conduct of the exam rather than the marking criteria. In fact, it is strongly emphasised everywhere that having a narrow list of questions that all pupils are going to be asked is malpractice.

Given the parameters of the exam, what might be wanted? Firstly, it was advertised as a conversation of between 4 and a half and 5 and a half minutes. On just one theme. So twice as long as the previous GCSE, which had a similar length conversation, but on two themes. Clearly, I don't like social media because it is boring is not going to see you through 5 minutes. Five minutes of such short answers would require about 30 questions in a relentless back and forth. I don't think I have 30 questions on most of the topics in these themes. And I don't think pupils have 30 variations on I don't like... because it is... So clearly, while the exemplars in the specification served their tangential purpose of sending a strong message as to what was NOT wanted, we had to figure out for ourselves what was wanted.

And it seemed reasonable to think that if we teach pupils to develop their answers spontaneously, and to respond to prompts from the teacher which would interrupt any pre-planned answer, then this would be rewarded.

The idea that a pupil who extended their answers spontaneously would be penalised is ridiculous. So is the idea that a teacher who interjects Why? For example? And so? And then? would be penalising their pupils.

That is what happened with this week's guidance, now apparently hastily withdrawn. Although I have yet to find anything official from AQA either presenting the guidance or withdrawing it.


A pupil who extended their answers spontaneously, would not necessarily reach the 17 questions total. A teacher who interrupts to prompt or redirect a pupil, pushing them to spontaneously develop an answer, would fragment the 3 clauses into a series of "minimal" answers.

Thank goodness AQA did publish the guidance. Imagine if it was being marked this way. And pupils who spoke and interacted spontaneously were marked down for answering fewer questions or for having fragmented clauses responding to the examiner's interjections. And we wouldn't know why it was happening.

This is the key thing that AQA missed. They think they have to quantify "amount of information." And they think it's only fair to publish it. What they fail to realise is that this then determines what answers we have to train pupils to give. Instead of evaluating what pupils say, the exam board are determining what they have to say. We have to train them to answer 17 questions with 3 clauses (some may fall short of 3 clauses). And because everyone will be doing this, it throws the emphasis onto the other criteria: accuracy and variety.

17 answers carefully box ticked, carefully accurate, deliberately including variety. This is recreating the exact conditions for fancy rote-learned pre-prepared answers. The very thing they were trying to get away from.

NOT Funny.


Sunday, 19 October 2025

Part 3 of A Spanner in the Works. AQA Guidance for Marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


 Now we know that pupils will NOT have to speak for 5 minutes on one theme in the Conversation, what should their answers look like?

We have to interrogate the exemplars from the specification. They are likely to creak under this exercise, as they were originally intended to be examples. But now they are being forced into the role of definitions of "extended" and "good".

Here they are:



You can see that they have been chosen to exemplify that pupils will NOT need to have extended answers or fancy language. The choices are deliberately, even knowingly at the level which previously we have aimed to move pupils away from. Now "I don't like social media because it is boring"  is the definition of Good development of an answer. The GCSE panel wanted to remove the Conversation because it led to rote delivery of long answers containing fancy language. The exam boards put the Conversation back in, and are signalling that it will not reward long memorised answers and deliberately inserted fancy language.

Of course, this also avoids rewarding pupils who can spontaneously develop answers and naturally use sophisticated language as part of a complex narration.

Looking closer, we can see that "amount of information" is being interpreted in a weird grammatical way. The exemplar for "extended response" includes three clauses.

So I think we are to assume that an answer delivering more information, but all in one clause, would not count as extended.

I love to go to the cinema in Norwich with my friends or family but not on my own to see an action film or another good film most weekends in a cinema with a big screen and a great sound system.

This example only has one conjugated verb I love... And although it contains a greater "amount of information" than the exemplar, we would have to count it as "minimal development". "Minimal development" of the "amount of information".

So verbs are crucial. Not the "amount of information".

What about the fact that the exemplar for "extended response" contains three different verbs? This is all we have to go on. So are we to assume this is also a requirement? What if I repeat the same verb?

I love to go to the cinema and I love to go to Norwich. I love to go with my friends or my family, but not on my own. I love to see an action film but I also love other sorts of film and I love to go most weekends to a cinema with a big screen and I love a cinema with a great sound system.

Is that now an "extended response"? Or is it disqualified because it is the same as the earlier "minimal response" with the verb repeated?

This makes a difference. One of these would be "good development" and the other one would be "an extended response"? Or not?

I like to play tennis because it is fun and exciting.

I like to play tennis because it is fun and it is exciting.

And the overriding question remains. Is "I go to the cinema and I watch films. I love films" really what is required for a grade 9? If so, we have got an awful lot of thinking to do about what we are teaching.

Of course, this exemplification was there all along, and isn't changed by the new 17 question guidance.



What has changed is the dropping of the requirement to talk for between 4 and a half and five and a half minutes on just one theme. This has been replaced by the requirement to give short simple accurate answers with 3 verbs for 17 questions (some of them can fall short of 3 clauses).

What also has changed is that everyone will make sure that pupils can tick this box, so the Conversation is now the equivalent of Controlled Assessment. Planned and prepared against a tickbox that everyone meets, so effectively irrelevant in its effect on the grade. And remember, AQA have already done the same thing to the Photo Card. We are right back in the bad old days of 2016 and the Bauckham report, with the wrong answer to the wrong problem.

This is exactly what this new exam was meant to avoid. And exactly what I feared it would do right from the start. An exam explicitly designed to change the way we teach. Ends up ruining language teaching again.

Part 2 of A Spanner in the Works. The AQA guidance on marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


 This is going to make a lot more sense if you have read Part 1 of how AQA have thrown a Spanner in the Works for how my department teach the Conversation part of the Speaking Exam.

A huge amount of thinking, collaboration and planning has gone into teaching this new GCSE, and in particular, the new Speaking Exam. Our KS3 is designed to teach pupils how to use a growing repertoire of language across topics, with an emphasis on not just learning more language, but on learning how to use it. Pupils work on thinking up what to say, how to make it personal, coherent, interesting and developed.

We start Year 10 with Module 0, showing them how their KS3 French already enables them to tackle the role play, unexpected questions and some conversation questions. In Year 10, we build up language, carefully transferring it across topics, and making sure pupils see how they can deploy it in the exam. I feel we are doing our best to put in place best practice, in dialogue with the ideas behind the new GCSE.

Last year we had the opportunity for Year 10 to do a Speaking Exam. Rather than an exam, it was more of a run-through, to familiarise them and us with the elements and demands of the exam. They had the Role Play, Read Aloud and Photo Cards in advance, so they could turn up and do the Exam in 10 minutes without the need for invigilators or prep time.

What did we discover? Not to be afraid of the exam. The Role Play - short answers containing a verb. The Read Aloud - stunning. The Unexpected Questions - a bit of explaining that you have to guess what you think the question is, say something related, then say a couple more random things that might be related. The Photo Card - say "there is" or "there are" 8 times for each picture, without risking trying to say anything else. (Post here on the negative effect of the AQA marking guidance on the Photo Card.)

That left the Conversation. We had NOT prepared answers to a list of questions. But pupils knew they would get questions that they could answer using their repertoire of opinions, reasons and tenses. They knew that we would prompt them for more using and, so, for example, why...? We didn't stick to one theme, but used it as an opportunity for them to show off their French across all different topics.

So what we discovered was that their French was up to the task. But the demands (at the end of all the other tasks) and the cognitive load of thinking up what to say and how to say it in French were too much. After a while their answers ran out of ideas and became repetitive, or we had to switch topics to keep them going. Or we said "Well done" and stopped before the full five and a half minutes.

This, then, was our focus for going into Year 11. Tweaks to the Scheme of Work. The Department Plan. INSET in September and department meetings. Individual teachers' Performance Management Targets. All in a coherent focus on managing the balance between having ideas prepared, but not memorising answers. Managing the balance between preparation of ideas and spontaneous improvisation of the French. So that the pupils could talk for 5 minutes on just one theme (double the time compared to the previous GCSE) without having memorised answers. What is the best way to teach pupils to talk for 5 minutes? Prepared answers are not the best way. Because the more prepared the answer, the quicker it is to deliver, and you end up having to learn ridiculous amounts to fill the time. Better to have a genuine conversation, with some ideas prepared, but making up the answer in response to the examiner's questions. A balance of prepared ideas, but spontaneous French.

So we do have a booklet of possible starter questions for the conversation. And pupils are challenged to answer the questions spontaneously in speech and to plan their ideas in writing. They do not memorise their answers, but they do have their ideas ready. We have been careful to mix the questions up across topics so the pupils are deploying the same repertoire irrespective of theme, and there is definitely no set list or order of questions. When we come back to practising questions, we don't let them look at their planned answers - they have to improvise a new answer based on the ideas they had come up with, just as we did with the previous GCSE. We work on creating answers in layers. So they can give an immediate response. Then back it up with reasons or if sentences or examples in past or future. They know that the teacher will prompt for this kind of extended detail with follow-up questions such as et... ? alors... ? par exemple... ? Pourquoi ? The conjunctions dice game in the second half of this post has featured heavily in getting them to extend answers and respond to being pushed in different directions by the throw of the dice. We have worked hard on the different directions a story can go in, with one idea leading to another, so you don't get stuck thinking up what to say next, as in the mouse and the cookie. We have even turned the order of Year 9 units around, to start with developing ideas into stories.

A huge collaborative and joined-up effort of the entire department, based on taking stock from the Y10 Speaking, and gearing up for the mock speaking next month.

Then AQA put out their guidance and it's hard not to feel as if the rug has been pulled out from under our feet.

It's not the 17 questions. They were always going to have to define "amount of information." Although defining it means everyone will make sure they meet it. With a planned and monitored set of questions. And because everyone meets it, the emphasis that swings the grade will fall on the other criteria: accuracy. And it's not the fact that AQA "extended answers" mean very short basic answers. Although a requirement for a set number of 3 clause accurate answers is perhaps best met through planned, prepared, rote learned answers. And it's not even the fact that redirecting prompts like and... so... for example... why?  would now invalidate the pupils' responses, by breaking up the 3 clauses.

Well, yes. It is all that. But the main thing is the removal of the timings. If you no longer have to fill 5 minutes (remember in the old GCSE, there were minimum times on each theme), then you no longer need to have a repertoire you can riff on confidently and indefinitely.

It's not hopeless. We just have to adapt. First we have to audit our questions for each theme. Are there 17 questions? If we include short prompts as further questions, can we do this without disqualifying the pupils' answer from reaching 3 clauses? How many more questions do we need so as to avoid repeating the same questions? How do we allow pupils to show what they can do in terms of inventive longer answers, but still get through 17 questions? Is the AQA exemplar answer "I go to the cinema and we watch action films. I love action films" really going to get a grade 9? If what our pupils can do is superfluous to requirements, what elements of the exam should we have been focusing on? And if the thing that is going to swing the exam is now the accuracy marks, does this mean our pupils now should learn scripted answers off by heart?

Part 1 of: A Spanner in the Works? The new AQA guidance for marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


I don't know where to start with this post. Or where it is going. But I think I know what the key problem is. And it's NOT the 17 questions. Do I know what to do about it? I'm working it out. But it may take more than one post...

This is going to be big. It wasn't supposed to be. It was supposed to quietly define "amount of information" in the Conversation part of the new AQA GCSE Speaking Exam. 



It was even supposed to disincentivise rote learning of scripted answers. I'm not sure how. Because even my immediate reaction to this is to check how many questions I have for each theme, and how many of them pupils would have extended answers for. And I live and breathe spontaneous answers in my teaching.

It's worth mentioning straight away that an AQA "extended answer" is not what we understand by an extended answer. For our Year 9s, working on extended answers means things like the examples below, moving from a random stream of French, to coherent answers, to past tense stories with cheats, to telling stories. These are written examples, but we spend much more time working on speaking and spontaneity, with strategies like Being Ben or telling stories round the class to develop pupils' ability to think what to say next.




No. For AQA, an extended answer looks like this:



Three clauses of particularly uninspiring language, containing an opinion and a conjugated verb. The example given for "Good Development" ("because it is boring") seems a deliberately knowing and sarcastic inclusion. Because this is clearly a response to the GCSE panel's initial attempt to do away with fancy pre-learned answers, when they originally proposed getting rid of the Conversation completely.

Both the GCSE panel and the exam board in their different ways are trying to get rid of pre-learned scripted rote answers.

But I can't see how this is not going to mean a return to rote-learned answers. Ticking off a specified number of answers means teachers having to plan carefully and keep track. Everyone will be making sure they hit the magic number. This then means that what differentiates one pupil's performance from another will be the criterion for Accuracy. And the need to deliver a set number of highly accurate answers will lead to... rote-learned answers.

Is the number of questions so prohibitively high that no-one would dream of learning that many answers? 17 three-clause answers for each of three themes is 51 answers at most, and with lots of cross-over, where a question could be used in more than one theme, the real total is lower. This is prime "learn by rote" territory.

I actually don't think the 17 answers are the problem. They were always going to have to define "amount of information". And I already suspected that the reduction in topic content was going to shift the balance back to pre-learned answers.

The actual problem is the ditching of the times. Nominally, the Conversation is supposed to last between four and a half and five and a half minutes. That's a long time to talk on just one theme; the old GCSE Conversation was this long over two themes. So even though I knew that 3 clauses was all that was required for an "extended" answer, I had spotted that filling 5 minutes was going to need pupils to have more to say. Our pupils work on developing answers spontaneously, responding to teacher prompts such as et alors... ? par exemple... ? pourquoi ? I will look in a later post at exactly where we are up to in terms of being able to riff on these prompts to fill 5 minutes. But all that may now have to go. Perhaps we were fooling ourselves all along that it was what was wanted.

And the five minutes could also have been a disincentive to learn and deliver pre-learned answers. A pre-learned answer, delivered fluently, takes up less time than an improvised one. Like Achilles chasing the tortoise, the more you fall back on pre-learned answers, the more you find you still have to say to fill the time.

So we were pleased to convince ourselves that improvised answers, with the teacher intervening to prompt for more detail, were the best way to fill five minutes.

That's what's gone. It's not 5 minutes anymore. We're left with the requirement to give 17 short but accurate answers. How does this not tip the balance back towards having prepared answers?

Incidentally, this is exactly the same mistake that AQA have made with their interpretation of the specification mark scheme for the photo card, with similarly negative consequences for teaching and learning, as I found in this post.

What about the idea of the examiner prompting the pupil for more detail, to push them to extend and develop? Prompts like et alors... ? pourquoi ? par exemple... ? all count as questions, so they would make it easy to get to 17. But what they also do is fragment the "extended answer" into single clauses. So instead of demonstrating the pupil's ability to extend spontaneously, they now disqualify the answers from counting as "extended", as each response may fail to meet the 3-clause threshold.

I have plenty more to say about exactly where we are and what to do next. But that's enough for now. It's NOT the 17 questions. It's the ditching of the 5 minutes. That changes everything.


I know the exam board had to define "amount of information" and don't want to see rote-learned answers. They will have tried out how the marking works on sample recordings of conversations. Have they done the opposite? Have they tried out what sort of conversation you get when you specify 17 short, accurate answers? I hope they are right that this means we are still better off teaching pupils to extend their answers spontaneously. I'll explore that in another post...


Thursday, 9 October 2025

Recent Posts on the Way Forward for Languages.

Here's a selection of recent posts on the two systemic problems facing MFL, why they are so bad, and what we could do about them.


 The impact of unfair grading:

https://whoteacheslanguages.blogspot.com/2025/08/unfair-grading-and-its-impact-in.html

Could the success of the languages for all pilot offer hope for mainstream language learning post 16? https://whoteacheslanguages.blogspot.com/2025/07/hope-is-in-air.html

The two things that need sorting to allow MFL to flourish. https://whoteacheslanguages.blogspot.com/2025/05/can-we-sort-out-languages-in-english.html


A Level Spanish 2025. How they make the exam too hard for even the tiny minority who take it. https://whoteacheslanguages.blogspot.com/2025/06/really-cool-translation-challenge.html

How bad is the reformed A Level? https://whoteacheslanguages.blogspot.com/2025/05/the-problem-with-this-level.html

How they make the A Level about obscure grammar even when it isn't relevant to answering the purported question. https://whoteacheslanguages.blogspot.com/2021/12/why-i-dont-call-it-summary-question.html 

And it's worth getting it straight from the horse's mouth. Look at the contempt for language learning in this submission from the people who "reformed" A Level: https://alevelcontent.wordpress.com/wp-content/uploads/2014/10/alcab-rationale-for-english-essay.pdf

The ugly reason why things are so bad for MFL post 16. https://whoteacheslanguages.blogspot.com/2023/10/colonial-curriculum.html

One easy thing to put right: https://whoteacheslanguages.blogspot.com/2025/01/one-thing-that-costs-nothing-which.html

Sunday, 5 October 2025

Am I about to come unstuck? - How much can you rely on a metaphor for learning?

How much can you rely on a metaphor for learning to guide your practice? All models of learning are metaphors, starting with the popular "storage and retrieval" model. This seems a particularly circular metaphor, based on comparing the brain to computer memory, which in turn is a metaphor based on human memory. Metaphors for the brain often go hand in hand with current technology. This post looks at how previous models included cogs, hydraulics, cables... And of course in languages we have been presented with the metaphor of pillars, which I examined in this post, showing how the metaphor revealed more than I expected: the carefully constructed, classically impressive pillars of free-standing, stand-alone grammar, vocabulary and phonics were deliberately an act of "folly".

You will know that my favourite metaphor for language learning is the snowball.

A few years ago, the day after a light snowfall, I was walking round the school with a pupil who had been sent out of his French class, to calm things down. He was telling me he didn't mind French lessons, but he just didn't know any French he could use. We stopped and I asked him where all the snow from yesterday had gone. He said, "It all melted, Sir." I asked him, "And where's your French?"

He was quick on the uptake (he is now a vicar, after a time in the police force), and said, "Oooh. Nice metaphor, Sir." He had been there in lessons while all the French was happening, but he hadn't managed to grab hold of any, roll it into a snowball, and stop it from melting.

This is the first use of the metaphor. To warn pupils that their French will melt. That it's their responsibility to grab hold of some and make it theirs. To roll it into a ball and stop it melting. And that more and more French will stick to it.

Then there's the message to teachers. We need to spend time making sure that pupils have a core of sticky French. That they are making it theirs and not letting it melt. It's important that our curriculum is designed so that we develop this core of language, using the same language over and over. And it's important not to design a syllabus where everything is ticked off once. The metaphor tells us that an even coverage of language will melt. What we want is a snowball of language that rolls on from topic to topic, getting bigger and bigger around a sticky core.

This post examines how to design a syllabus where new language adds a layer of accretion to the snowball. It starts from how to add new language to the pupils' existing language. Not by chopping the language up into bite-size chunks of cooked omelette and hoping the pupils can make their own omelette out of the pieces. Mixed metaphor alert. Cooking an omelette from raw eggs is the equivalent of the snowball approach; chopping up the cold, dead omelette is the equivalent of the even coverage approach.

So far so good. But how far does the snowball approach get me with the new GCSE?

For the speaking and writing, it's fundamental to our vision. We use it explicitly in our resources to show pupils how to tackle the demands of the speaking exam, whatever the topic. 

But this new GCSE has a huge gulf between the language needed for the Speaking and Writing exams and the vocabulary list for the Listening and Reading exams. The vocabulary list is not designed around the language needed for the topics or for the tasks of the exam. For example, when you get to the Jobs unit, there are fewer than 10 job words in the list. The topic of Jobs is just another arena in which to meet the non-topic vocabulary. It's the even coverage idea reimagined. This time the snowfall is meant to be so intense that layer upon layer of French falls before the previous layer has had time to melt. It risks leaving my pupils with the pathetic snowballs they were so proud of, lost in a snowy waste that stretches off to the horizon. Or at least that's what it's starting to feel like.

But is anyone achieving this deep layer upon layer of snow? Does it mean doing the listening and reading activities from the textbook strictly by the book, missing nothing out, because without the intensity of repeated snowfall the melting will happen? To achieve this, we would have to abandon the lessons focused on getting pupils better at using their snowball of French: all the lessons on practising speaking, thinking what to say next, getting really good with the language they have made their own. Do the textbooks actually deliver the meticulous coverage and re-coverage required for this permanent even coverage not to melt?

Anyway, for me and my Year 11s, who only started Spanish in Year 10, it's too late. The snowball approach is doing its job for the speaking and writing exams. Are we scared of the vocabulary for the listening and reading exams? What I am hoping is that having such a huge snowball will take care of it. Now that their snowball is so big, with all the things they can say or write, surely the little rocks, bits of grass, sticks, abandoned carrots and coal from other people's melted snowpersons... it can all stick to their snowball as they roll it round and round...? Can it?

This is the idea. That by making sure the pupils have their own snowball, which has got bigger and bigger, more and more Spanish will stick to it. Including words that don't add a neat, natural layer of their own: odd words that wouldn't stick by themselves, but get swept up along with the snow.

Can I rely on this metaphor to get me and my pupils through the exam? Or will the whole thing come unstuck?