Sunday, 1 February 2026

The Language Lab Podcast

 This week I was a guest on https://www.languagelabpodcast.co.uk with Will Marks. We examined in detail where we are with teaching languages in English schools, how we got here, and where we might aim to get to.

If you don't have Spotify, you can listen on other sites such as https://podcastaddict.com/podcast/the-language-lab-podcast/6007214

And Will made a nice AI image of us having a cup of malted milk in Hong Kong:


Do let me know what you think!

Sunday, 18 January 2026

Guest Post: A turning point...

In a major Camarón de la Isla / Bob Dylan / Judas Iscariot / Keith Jarrett "adopts electric amplified music" turn of events, I interviewed Gemini AI on a turning point in Language Teaching in English schools. Here is its summary of our conversation.


 The Bauckham Report: Why MFL is Finally Ready to Turn the Page

For nearly a decade, the 2016 Bauckham Report (the Modern Foreign Languages Pedagogy Review) has acted as the "north star" for language education in England. Led by Ian Bauckham and heavily championed by the Department for Education, it promised to rescue a "fragile" subject by returning to basics: Phonics, Vocabulary, and Grammar.

However, as we enter 2026, the cracks in this "knowledge-rich" foundation have become impossible to ignore. From its failure to understand the trauma of the "Controlled Assessment" years to its silence on the grading crisis, the report is increasingly seen not as a cure, but as a misdiagnosis.

Here is why it is time for the MFL community to move on.

1. Fighting a War That Had Already Ended

The report’s most significant flaw was its historical timing. It spent much of its energy attacking the "rote-learning" and "memorized chunks" that defined the Controlled Assessment (CA) era (2009–2016).

What the report failed to acknowledge was that teachers didn't choose rote learning because they hated grammar; they chose it because the assessment system punished anything else. By the time the report’s recommendations were being implemented, CAs had already been abolished. Teachers were already pivoting toward spontaneity for the new linear exams. The Bauckham report was essentially providing a map for a territory that had already changed.

2. The "Three Pillars" vs. The Human Element

The report introduced the "Three Pillars" model:

Phonics: Sound-symbol correspondence.

Vocabulary: High-frequency word lists.

Grammar: Systematic structural progression.

While these are essential building blocks, the report treated them as the entire building. By narrowing the focus to these mechanics, the curriculum often stripped away culture, identity, and real communication. Students might know the 1,700 most common words, but if they can’t use them to talk about their own lives or understand a different culture, the "why" of language learning disappears.

3. The Silence on Severe Grading

Perhaps the report’s greatest "blind spot" was grading severity. For years, data has shown that it is significantly harder to get a top grade in a language than in History or Geography.

By focusing purely on pedagogy (how we teach), the Bauckham report implicitly blamed teachers for the decline in student numbers. It ignored the fact that many students drop languages not because the lessons are "bad," but because they are "strategic." They know a language GCSE is a higher risk to their grade average. It wasn't until 2024 and 2025 that we saw adjustments from Ofqual to address this—an intervention the 2016 report should have championed from the start.

4. A "Prescriptive" Future?

The 2024 GCSE reforms (being examined for the first time this year, 2026) are the ultimate legacy of the Bauckham report. They feature:

Strict Vocabulary Lists: 1,200 to 1,700 "word families."

No Unfamiliar Material: A permanent removal of "abstract" or "unseen" language.

Dictation and Reading Aloud: A heavy focus on phonics.

While these changes aim for "accessibility," critics argue they make the subject "dry" and "mechanical." Instead of opening doors to a global world, the curriculum has become a closed loop of pre-defined words.



Conclusion: Time for a New Narrative

The 2016 Bauckham report was a product of its time—an attempt to bring "rigor" to a subject in crisis. But "rigor" without relevance is a recipe for further decline.

To save MFL, we must move beyond the "Three Pillars" and embrace a model that values the multilingualism of our students, the cultural depth of our subjects, and—above all—the simple joy of being understood in another language.



You can read more of my original interview with Gemini here, including its views on how an obscure, flawed report came to have such political influence. And how, in Gemini's words, it became "weaponised" against the profession.

Thursday, 1 January 2026

Let's enjoy and celebrate!

 This post is to start the year with some absolutely delightful examples of pupils' work. Have a look and see if it brightens your day the same way it did mine!

Year 9 French Written Assessment October

Year 9 French Written Assessment October


These are done in test conditions without special warning or preparation. You can see from the tickbox criteria at the bottom that the pupils understand that writing spontaneously from French they know will score at a different level than pre-planning and learning. In fact, I shouldn't have used the words "score" or "level", because the statements are descriptive and informative rather than linked to ranking or judgements.

You can see that the pupils have also commented on their work, starting with specifying that they challenged themselves to write this using their own repertoire of French. They may also have volunteered a comment on the quality of the work, or further information on their experience of the process of creating it. 

Here are some more.

Year 9 Writing Assessment October


Perhaps the most important thing for me here is that this writing assessment is not an exercise in demonstrating that they can correctly use certain items of language. In all cases, the pupils are driven by wanting to say things. True things, fun things, silly things, imaginary things, sad things, vindictive things, and sometimes running-out-of-things-and-not-really-knowing-what-to-put things. Their comments at the end show that sometimes they know that things weren't quite "right" or that they took risks. They are doing this in the confidence that both they and the reader understand they are on a journey with language learning, where their ability to express themselves is central and being developed.

There are aspects of this work that I can follow up in another post: How does this written work correspond to their ability to speak with increasing fluency? What feedback should I give on the work? What does it show about chunking of language versus manipulation of atomised language? I have plenty to say on all of these, both to them and also on here. But for now, let's start the year by just enjoying and celebrating this!

Year 9 Written Assessment October


Monday, 22 December 2025

What do we mean by "meaning"? And does it matter?

 In language teaching, we seem to be struggling with two different meanings of the word "meaning".

On the one hand we have "I know that tortue means tortoise". Where pupils are tested on their knowledge of the meaning of words. It's an approach that believes in regularly testing pupils' ability to parse sentences containing known words and known grammar, to cement memorisation and conceptualisation. The language is selected (by frequency of vocabulary) and sequenced (to exemplify step-by-step grammar concepts), so that the pupil's knowledge is built and reinforced.

On the other hand, we have the idea that language should be for pupils to express themselves and understand each other, to create and take part in communicating "meaning". This reminds me of the Spanish and French expressions for "to mean" - querer decir / vouloir dire - to want to say something.

Does it matter in language learning that our pupils learn to use their language to say things they want to say? Does it matter that what they read or hear has something to say, rather than just to practise and test their knowledge of language features?

We see this in GCSE and A Level listening and reading papers. What masquerades as a comprehension question often turns out to be asking pupils to show they can parse certain language features, even when they are not relevant to the purported question. At GCSE, we have seen questions like these. Things like "He didn't get on with his teachers" (which accurately characterises the understanding of the relationship) being marked as wrong. Because it didn't accurately parse the word "badly". Or at A Level, this question about what someone did one day. Answering the question (she went to see the castellers, she took a photo, she posted it online) is not rewarded. Because it doesn't show knowledge of the grammatical features "it was appetising to her" and "she decided to...". And these are not random rogue questions. This is a feature of how the examiners see meaning as a simple demonstration of knowledge of the "meaning" of words and grammar. Not the meaning of what someone said.

It is also in Ofsted's guidance on curriculum design for languages. They insist that language be introduced in a strict sequence, based on exemplifying concepts, not based on teaching pupils to say things. The example they give is to teach pupils to talk about red dogs and red tortoises, and to avoid teaching green dogs and green tortoises until a future step. Because the adjective rouge is invariable for gender. Whereas vert would require knowledge of adjectival agreement if applied to a tortoise. All of which ignores the fact that if you are going to teach pets, pupils will want to talk about a range of pets in authentic colours. At this stage, pupils are making links from the language to the real world, rather than links and patterns internal to the language. This kind of real meaning is important to learners. And you wouldn't want to lose that!

But is it important to learning? Maybe that kind of meaning, with lots of pupils all trying to say random different things they aren't ready for yet, leads to them being given a collection of one-off things to say, that don't stay in long term memory and don't add up to coherent conceptualisation of the grammar of the language.

Here's an example that happened with my Spanish class last year. They were the last class to take the old-style GCSE. Back in Year 10, we had done some work on Shopping. Improvising answers in speaking, then writing them up. Most pupils did something that rehearsed the repertoire of opinions, reasons and tenses, with a bit of conflict, conversation and disappointment thrown in.

But one pupil came up with this:



You can see it does contain the elements of the repertoire we practise using across topics. It has opinions with reasons, direct speech, an element of conflict of opinions, and use of the imperfect and preterite, ending in disappointment. You can see the underlying structures of the model aquarium story, both in terms of the repertoire of language and in how they are deployed.

But this pupil's answer was different to many others in the class. Because they were telling a true story. And a painfully personal one, with genuine and lasting disappointment.

Does this make any difference in terms of language learning? I am tempted to throw my hands up in the air and raise my eyebrows. Because of course we want to be equipping pupils to say things they actually want to say. Not teaching it as some kind of sudoku where they concoct answers to successfully fit all the required specified pieces into a pattern to show they can solve the puzzle. But even just in terms of long term memory and internalisation of language, does this pupil's work show something important?

Well. Here's what happened in Year 11. As we prepared for the exam, I did not let them look at their work from Year 10. They should not only have internalised the language, but also still be able to deploy it.

The pupils who had cobbled together an answer to show they could use the language features, were basically starting from scratch. They had no memory of the work that they had done in Year 10.

Whereas the pupil who had written the story based on a true story that mattered to them personally, wrote this.



They were able to quickly and fluently reproduce several of their answers from Year 10 because what they had written was memorable. But if you look closely, it's not at all a case of word-by-word memorisation. It's actually a different version of the same story. And I am pretty sure that if I listened back to their recorded GCSE speaking exam, I would find they came up with yet another spontaneous version on the day of the exam.

Where does this leave the current GCSE speaking exam? AQA's marking of the Conversation by counting conjugated verbs seems to have gone entirely towards the parsing of language features, rather than the creation of meaning. As we see in this post, candidates who want to say things in genuine response to the examiner's question do worse than pupils who trot out a list of three essentially meaningless verbs.

This exchange from my mock speaking exam would score in the bottom band:

Have you ever had a problem abroad?

Yes. Once in a shop in Spain my mum forgot how to say 'coat'  in Spanish.

So...?

So we went to a different shop to buy a coat.

The pupil made the unforgivable mistakes of giving a genuine answer to the question, interacting with the examiner, and putting too much information into one clause.

Had they just said, "I went to Spain. I went to a restaurant. I don't like to eat in a restaurant", then they would have been in the top band.

All my instincts and experience, such as the examples of the pupils in this post, are telling me that this is bad for language learning.




Saturday, 13 December 2025

AQA Conversation really not working

 At our latest ALL in the East meeting (scroll down for previous meetings on this ALL page), we discussed the implications of the AQA markscheme for the Conversation, in the light of conducting mock exams. The main area of concern was the way the need for information to be delivered in 3 clause chunks, each with a conjugated verb, did not make for a natural conversation with pupils interacting spontaneously with the teacher-examiner.

In the run-up to the mock, I had been confident that despite the confusion around AQA's marking of the conversation, it wouldn't in practice have too much effect. I imagined the stronger candidates would still do well. And the weaker candidates would still do less well.

I was wrong. Stronger candidates tended to put more information into a single clause. Stronger candidates tended to interact more with the examiner. Stronger candidates tended to give an answer that focused on saying something they wanted to say. All three of these things penalised them.

Here's an example.

Pupil: I always like to try to eat healthy food like fruit and vegetables.

Teacher: For example?

Pupil: For example yesterday I ate salad for lunch and a potato for tea.

Each of these answers is a single clause, putting this pupil in the bottom band. Their responses are both minimal. 

Had they said simply, "I like to eat fruit and I love vegetables. I don't like salad", they would have scored in the top band, for an "extended" answer using three conjugated verbs.

One thing that is recommended in the AQA spec is for the teacher-examiner to use short follow-up prompts, as I did in the example above, to elicit more information from the candidate. Things like "and...?", "so...?", "for example...?", "Why?", "And what if it rains...?"

I have certainly used this in the previous GCSE, to interrupt and redirect a pupil who had a pre-learned answer to deliver, directing them away from stilted word by word regurgitation, and steering them into a more spontaneous interactive conversation. We also use it in teaching, for example with the conjunctions dice game described in this post, or in working with pupils explicitly on how an answer can develop logically and coherently.

After our discussions in the meeting, I contacted AQA to see if using these interjections to invite the pupil to develop their answer would allow the two utterances in this example to count as developing the answer. So that my prompting, as recommended in the guidance on conduct of the exam, was allowing the pupil to show that they could continue their idea and spontaneously give further detail in interaction with the teacher-examiner. That's what I asked. And that is what I was hoping they might say.

Or, on the other hand, could it be AQA's decision that interacting like this actually penalises the pupil, because it means that what follows the "For example...?" doesn't count as developing the answer. It counts as a separate minimal answer.

That was indeed AQA's response. In this example from my mocks that I put to them, they determined that this is two minimal answers. This pupil would be in the bottom band.

This means several things.

Firstly, it means that I am less likely to use these follow-up prompts. Because a pupil who has already given information could give some more, but may not have the required three items if they already told me one or two things in their initial answer. Saying "For example...?" may be trapping them into giving one further detail. So I will be pushed towards falling back on my list of starter questions, making the exam more of a predictable plod through a list of questions. Even though this is explicitly prohibited and undesirable.

Secondly, it means that I have to teach pupils to give answers in chunks of three conjugated verbs. This risks moving towards pre-prepared and over-rehearsed answers in order to achieve this. Again, undesirable.

Thirdly, it means that in conducting the exam, if a pupil only gives a single-clause answer, or a two-clause answer, I will have to sit and wait for them to add a third clause. Pupils will have to be trained to just say something. Without worrying if it is a logical development, or something they really want to say. As in the example above, just adding "I don't like salad" would push you into the top band.

Fourth, and possibly worst of all, I have to train my best pupils to be more like the weakest. I have to train them NOT to talk naturally and put lots of information into a clause. "I always like to try to eat healthy food like fruit or vegetables" has to be replaced by minimal chunks of information, each with a verb. "I like cheese. I love cake. I don't like salad."

Sorry. I just realised I accidentally and ironically used the word "minimal" to refer to what would be a top band "extended" answer. This markscheme is topsy-turvy.  In this universe, the clause that contained most information was the minimal one.

I must emphasise that, going into the mocks, I told myself that the marking wouldn't be too bad. But the experience did not, unfortunately, live up to that. There were numerous examples of strong candidates putting lots of genuine information into one clause. Here's an exchange with another pupil:

Have you ever had a problem abroad?

Err. Once, on holiday, in a shop in Spain, my mum forgot how to say "coat" in Spanish.

So...?

Oh. So we had to go to another shop to buy a coat.

This doesn't count as a pupil developing their answer in interaction with the teacher-examiner. This counts as two bottom band minimal answers.

I had to explain this to the pupil, using this example from their exam, to show them where they were losing marks. Their response: "But that's not how a conversation works."

Well. It is now.



Some limited food news:






Sunday, 23 November 2025

Is there Salvation for the GCSE Conversation?

 In a previous post, we looked at how this definition of "good development" is a joke.

AQA exemplification of "amount of information" in the new GCSE spec


Choosing the example "I don't like social media because it is boring" as the definition of "good development" is knowingly taking exactly the sort of answer we don't accept from pupils and holding it up as desirable at GCSE. That post recognises it is doing the important job of signalling that memorising long fancy answers is not required. But that job should have been done by the parameters of the task, not by the markscheme.

So we've ended up with a markscheme that defines as "good development" something which patently is not an example of good development.

The exam board have rescinded the 17 question guidance for marking. Although I have yet to see any information about its promulgation or withdrawal on any AQA site. But they cannot rescind the markscheme. Because it's in the specification.

Let me give you one example of what this would mean. Out of these two answers, which would score higher?

¿Tienes un restaurante favorito donde te gusta ir para celebrar una ocasión especial?

a. Me gusta Ed's diner. Es grande y es divertido.

b. Pues, en el pasado, siempre me gustaba ir a festejar el cumpleaños de mi hermana menor en un restaurante pequeño cerca de mi casa.

The answer is, of course, a.

Answer a. has three clauses, so it is not just well developed. It is an extended answer. Answer b. has just one clause, with pieces of information added on. A minimal response.

Or try this one:

¿Te gustan los animales?

a. Sí. Tengo un gato. Mi gato es grande. Mi gato es negro.

b. Sí, y algún día espero ser veterinario en mi propio consultorio.

Answer a. again is an extended answer. Two bands above answer b. Which is only a minimal answer.

And of course, in both cases, the pupil attempting answer b. would be more likely to fall into error. So we should strongly advise pupils against this kind of answer which does not score well for development and could also lose marks for accuracy.

The exam board, when they withdrew the 17 question guidance, were shocked that schools would "game the exam" by training pupils to give 17 accurate 3-clause answers. This is the problem. We have an exam board setting the goalposts. Then bemused that schools aim for them.

We have already seen this in the Photo Card where the marking guidance actively disincentivises good teaching. And in the questions following the Read Aloud task, where the best tactic is to say 3 random things linked to the topic of the question.

The problem is that there is no credit for true development. For coherence. For the three clauses to be linked. Or convincing or personal.

It has come down to saying three accurate clauses. Only got 2 clauses? Train your pupils to throw in a third formulaic add on:

Sí, y algún día espero ser veterinario en mi propio consultorio. Si puedo.

Sí, y algún día espero ser veterinario en mi propio consultorio. Creo yo.

Sí, y algún día espero ser veterinario en mi propio consultorio. Me gusta la idea.

Sí, y algún día espero ser veterinario en mi propio consultorio. ¿Por qué no?

That would lift those minimal answers to "good development". Although still not good enough to be "extended".

It's a markscheme that rewards inanity.

It's a markscheme that is inane.


Is there anything we can do?

Yes. There is. But it's going to take a bit of sophistry and exegesis. Because the exemplification is written into the specification and will have to be interpreted in the same way jurists look at the intention of the framers of a sacred text like the American Constitution.

This would mean making a nice distinction with the exemplification in the specification: it is there to exemplify. NOT to define. So although an inane 2-clause answer would qualify as "well developed", a coherent answer should qualify as "better developed" than an inane response. Over a 5-minute conversation, a candidate who can be personal and coherent should see their responses rewarded over the candidate whose responses come in 2-clause bundles, but where the information is basic and the purported links are inane.

And a pupil whose responses over a 5 minute conversation take answers and extend them logically, coherently, with convincing detail and examples (even if it means making a few more mistakes) could be rewarded more highly than responses made up of random assertions bundled into a 3 clause answer but with no real link, lacking anything that would actually be worthy of the name "development".

The exam board could stop counting clauses as the Church stopped counting angels dancing on the head of a pin. They could allow the examiners to take into account whether the information in the answer was coherent, personal, linked, thoughtful, interesting. An answer that was going somewhere rather than an answer that is going nowhere. Because although the overall criterion is "amount of information", it is broken down into the idea of "good development". And coherence is surely a factor to be considered when looking at "good development".

The exam board have made their point with the exemplification that long fancy memorised answers aren't wanted. The exemplification has dealt with what is NOT wanted. It's done its job. But it mustn't define what pupils must do in the exam. The exam board clearly confessed this when they bemoaned teachers "gaming" the guidance. So they need to not be pinned down by it.

The exemplification applies to the micro level of utterances. What they hadn't thought through was what their goalposts mean for a 5-minute conversation. Clearly, "because it is boring" is not going to see a pupil through 5 minutes. This was the purpose of the apocryphal 17 Questions. They were an attempt to extend the micro exemplification to the whole conversation. An interpretation which, as it was not in the seminal text, could be withdrawn after a few days in limbo. Especially as it tried to do away with the timings, which, although "recommended", were in the framers' original text.

So let's pin our theses to AQA's door. The conversation should be around 5 and a half minutes, as stated in the specification. The conversation should not be rote memorised fancy answers, as exemplified in the specification, nor conducted via a list of pre-ordained questions. The criteria for marking are for the amount of information, including how well this is developed and extended. The specification exemplars take us only so far with this, exemplifying how fancy long answers are not wanted. But they don't show us what a 5 minute conversation of developed or extended answers should look like. A conversation with give and take with the examiner, genuine questions about the pupil and their ideas. Some of these will be developed in more detail than others, and a good candidate will have genuine and convincing development, with a level of coherence and exploration. The intention of the framers was not to reward counting inane 3 clause clusters of meaningless language, deliberately kept simplistic but accurate.

And the whole "I have a cat. It is big. It is black." catechism brings the GCSE into unsustainable disrepute when Year 9 are attempting things like this (below). We cannot have a GCSE that rewards inane 3-clause bundles over genuine development and whose cardinal role is to hold back pupils' expression and bind them to arcane formulaic responses. Don't know where this religious metaphor came from. But it does feel as if we've gone back a few hundred years to something on the verge of collapse! Read this. It will cheer you up:




Friday, 24 October 2025

Marking the Conversation at GCSE (AQA) -- Not funny

There are two meanings to the phrase "a joke". One is something deliberately risible, to make those in the know laugh. The other meaning is that something is an object of derision. Sometimes both meanings coincide. As in: this definition of "Good Development" is a joke.



This is from the new specification for GCSE in French, German and Spanish from AQA. And anyone who knows anything about teaching languages will spot the joke. "I like to eat carrots because it is interesting" is the kind of desperate answer we don't accept from pupils. Our teaching consists of trying to move them on from this kind of answer. There are teachers who ban "it is interesting" because it's seen as the last resort of pupils who can think of nothing to say and have turned up to an exam utterly unprepared.

Yet here we have "I don't like social media because it's boring" as the very definition of "Good Development."

What is it trying to signal? Two clauses is what is considered a well developed answer. And three clauses counts as an extended response. It is signalling that long pre-learned answers are not required in order to perform well. Its message is all about what is NOT wanted, rather than thinking through what might be required.

In exactly the same way, it is signalling that deliberately fancy expressions thrown in to wow the examiner are not wanted. In terms of amount of information, "it is boring" is no worse than a pre-learned "autant que je sache". Unfortunately it also means "it is boring" is just as good as a thoughtful "because it takes up too much of my time".

We are looking at very complex arguments being played out, about the nature of the level of difficulty in languages. There are lots more ironic jokes at play here. Like the fact that the autant que je sache was a favourite of teachers most strongly associated with the "Knowledge Curriculum". Supposedly to show how well their pupils could do if taught "properly". When all they were doing was showing that the supposed hierarchy of difficulty is bogus. Just as using je vais used to be a whole National Curriculum level higher than je dois because je vais is the future.

Revenons à nos moutons:



This really should have had no place in the Specification. It too clearly smacks of in-jokes and point scoring in the spat between exam boards and the GCSE panel in the creation of the new exam. It is too focused on what is NOT wanted (rote answers and fancy expressions), rather than thinking through what IS wanted. As such, it could have been guidance on the conduct of the exam rather than the marking criteria. In fact, it is strongly emphasised everywhere that having a narrow list of questions that all pupils are going to be asked is malpractice.

Given the parameters of the exam, what might be wanted? Firstly, it was advertised as a conversation of between 4 and a half and 5 and a half minutes. On just one theme. So twice as long as the previous GCSE, which had a similar length conversation, but on two themes. Clearly, "I don't like social media because it is boring" is not going to see you through 5 minutes. Five minutes of such short answers would require about 30 questions in a relentless back and forth. I don't think I have 30 questions on most of the topics in these themes. And I don't think pupils have 30 variations on "I don't like... because it is...". So clearly, while the exemplars in the specification served their tangential purpose of sending a strong message as to what was NOT wanted, we had to figure out for ourselves what was wanted.

And it seemed reasonable to think that if we teach pupils to develop their answers spontaneously, and to respond to prompts from the teacher which would interrupt any pre-planned answer, then this would be rewarded.

The idea that a pupil who extended their answers spontaneously would be penalised is ridiculous. Or that a teacher who interjects "Why? For example? And so? And then?" would be penalising their pupils is also ridiculous.

That is what happened with this week's guidance, now apparently hastily withdrawn. Although I have yet to find anything official from AQA either presenting the guidance or withdrawing it.


A pupil who extended their answers spontaneously would not necessarily reach the 17 questions total. A teacher who interrupts to prompt or redirect a pupil, pushing them to spontaneously develop an answer, would fragment the 3 clauses into a series of "minimal" answers.

Thank goodness AQA did publish the guidance. Imagine if it was being marked this way. And pupils who spoke and interacted spontaneously were marked down for answering fewer questions or for having fragmented clauses responding to the examiner's interjections. And we wouldn't know why it was happening.

This is the key thing that AQA missed. They think they have to quantify "amount of information." And they think it's only fair to publish it. What they fail to realise is that this then determines what answers we have to train pupils to give. Instead of evaluating what pupils say, the exam board are determining what they have to say. We have to train them to answer 17 questions with 3 clauses (some may fall short of 3 clauses). And because everyone will be doing this, it throws the emphasis onto the other criteria: accuracy and variety.

17 answers carefully box ticked, carefully accurate, deliberately including variety. This is recreating the exact conditions for fancy rote-learned pre-prepared answers. The very thing they were trying to get away from.

NOT Funny.


Sunday, 19 October 2025

Part 3 of A Spanner in the Works. AQA Guidance for Marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


 Now we know that pupils will NOT have to speak for 5 minutes on one theme in the Conversation, what should their answers look like?

We have to interrogate the exemplars from the specification. They are likely to creak under this exercise, as they were originally intended to be examples. But now they are being forced into the role of definitions of "extended" and "good".

Here they are:



You can see that they have been chosen to exemplify that pupils will NOT need to have extended answers or fancy language. The choices are deliberately, even knowingly, at the level which previously we have aimed to move pupils away from. Now "I don't like social media because it is boring" is the definition of Good development of an answer. The GCSE panel wanted to remove the Conversation because it led to rote delivery of long answers containing fancy language. The exam boards put the Conversation back in, and are signalling that it will not reward long memorised answers and deliberately inserted fancy language.

Of course, this also avoids rewarding pupils who can spontaneously develop answers and naturally use sophisticated language as part of a complex narration.

Looking closer, we can see that "amount of information" is being interpreted in a weird grammatical way. The exemplar for "extended response" includes three clauses.

So I think we are to assume that an answer delivering more information, but all in one clause, would not count as extended.

I love to go to the cinema in Norwich with my friends or family but not on my own to see an action film or another good film most weekends in a cinema with a big screen and a great sound system.

This example only has one conjugated verb, "I love...". And although it contains a greater "amount of information" than the exemplar, we would have to count it as "minimal development". "Minimal development" of the "amount of information".

So verbs are crucial. Not the "amount of information".
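Purely to make that counting logic concrete, here is a deliberately crude sketch in Python of the rule the exemplification seems to imply: split an answer into clauses, count the clauses that contain a conjugated verb, and band on that count alone. The verb list, the clause splitting and the band labels are my own toy assumptions for illustration, not anything AQA publishes, and real marking is of course done by human examiners. But it shows how one information-rich clause falls below three short ones.

import re

# Toy list of conjugated verb forms - a hypothetical stand-in, not AQA's.
CONJUGATED_VERBS = {"love", "like", "go", "watch", "eat"}

def band(answer):
    # Split into rough clauses on sentence breaks and co-ordinating conjunctions.
    clauses = re.split(r"[.!?]| and | but ", answer.lower())
    # Count only the clauses that contain a conjugated verb.
    verb_clauses = sum(1 for c in clauses
                       if CONJUGATED_VERBS & set(re.findall(r"\w+", c)))
    if verb_clauses >= 3:
        return "extended"
    if verb_clauses == 2:
        return "good development"
    return "minimal"

# One clause carrying lots of information: banded "minimal".
print(band("I love to go to the cinema in Norwich with my friends most weekends."))
# Three short clauses, three conjugated verbs: banded "extended".
print(band("I go to the cinema and I watch films. I love films."))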

What about the fact that the exemplar for "extended response" contains three different verbs? This is all we have to go on. So are we to assume this is also a requirement? What if I repeat the same verb?

I love to go to the cinema and I love to go to Norwich. I love to go with my friends or my family, but not on my own. I love to see an action film but I also love other sorts of film and I love to go most weekends to a cinema with a big screen and I love a cinema with a great sound system.

Is that now an "extended response"? Or is it disqualified because it is the same as the earlier "minimal response" with the verb repeated?

This makes a difference. One of these would be "good development" and the other one would be "an extended response"? Or not?

I like to play tennis because it is fun and exciting.

I like to play tennis because it is fun and it is exciting.

And the overriding question remains. Is "I go to the cinema and I watch films. I love films" really what is required for a grade 9? If so, we have got an awful lot of thinking to do about what we are teaching.

Of course, this exemplification was there all along, and isn't changed by the new 17 question guidance.



What has changed is the dropping of the requirement to talk for between 4 and a half and five and a half minutes on just one theme. This has been replaced by the requirement to give short, simple, accurate answers with 3 verbs for 17 questions (some of them can fall short of 3 clauses).

What has also changed is that everyone will make sure that pupils can tick this box, so the Conversation is now the equivalent of Controlled Assessment. Planned and prepared against a tickbox that everyone meets, so effectively irrelevant in its effect on the grade. And remember, AQA have already done the same thing to the Photo Card. We are right back in the bad old days of 2016 and the Bauckham report, with the wrong answer to the wrong problem.

This is exactly what this new exam was meant to avoid. And exactly what I feared it would do right from the start. An exam explicitly designed to change the way we teach. Ends up ruining language teaching again.

Part 2 of A Spanner in the Works. The AQA guidance on marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


 This is going to make a lot more sense if you have read Part 1 of how AQA have thrown a Spanner in the Works for how my department teach the Conversation part of the Speaking Exam.

A huge amount of thinking, collaboration and planning has gone into teaching this new GCSE, and in particular, the new Speaking Exam. Our KS3 is designed to teach pupils how to use a growing repertoire of language across topics, with an emphasis on not just learning more language, but on learning how to use it. Pupils work on thinking up what to say, how to make it personal, coherent, interesting and developed.

We start Year 10 with Module 0, showing them how their KS3 French already enables them to tackle the role play, unexpected questions and some conversation questions. In Year 10, we build up language, carefully transferring it across topics, and making sure pupils see how they can deploy it in the exam. I feel we are doing our best to put in place best practice, in dialogue with the ideas behind the new GCSE.

Last year we had the opportunity for Year 10 to do a Speaking Exam. Rather than an exam, it was more of a run-through, to familiarise them and us with the elements and demands of the exam. They had the Role Play, Read Aloud and Photo Cards in advance, so they could turn up and do the Exam in 10 minutes without the need for invigilators or prep time.

What did we discover? Not to be afraid of the exam. The Role Play - short answers containing a verb. The Read Aloud - stunning. The Unexpected Questions - a bit of explaining that you have to guess what you think the question is, say something related, then say a couple more random things that might be related. The Photo Card - say "there is" or "there are" 8 times for each picture, without risking trying to say anything else. (Post here on the negative effect of the AQA marking guidance on the Photo Card.)

That left the Conversation. We had NOT prepared answers to a list of questions. But pupils knew they would get questions that they could answer using their repertoire of opinions, reasons and tenses. They knew that we would prompt them for more using "and...?", "so...?", "for example...?", "why...?" We didn't stick to one theme, but used it as an opportunity for them to show off their French across all different topics.

So what we discovered was that their French was up to the task. But the demands (at the end of all the other tasks) and the cognitive load of thinking up what to say and how to say it in French were too much. After a while their answers ran out of ideas and became repetitive, or we had to switch topics to keep them going. Or we said "Well done" and stopped before the full five and a half minutes.

This then, was our focus for going into Year 11. Tweaks to the Scheme of Work. The Department Plan. Inset in September and department meetings. Individual teachers' Performance Management Targets. All in a coherent focus on managing the balance between having ideas prepared, but not memorising answers. Managing the balance between preparation of ideas, and spontaneous improvisation of the French. So that the pupils could talk for 5 minutes on just one theme (double the time compared to the previous GCSE) without having memorised answers. What is the best way to teach pupils to talk for 5 minutes? Prepared answers is not the best way. Because the more prepared the answer, the quicker it is to deliver and you end up having to learn ridiculous amounts to fill the time. Better to have a genuine conversation, with some ideas prepared, but making up the answer in response to the examiner's questions. A balance of prepared ideas, but spontaneous French.

So we do have a booklet of possible starter questions for the conversation. And pupils are challenged to answer the questions spontaneously in speech and to plan their ideas in writing. They do not memorise their answers, but they do have their ideas ready. We have been careful to mix the questions up across topics so the pupils are deploying the same repertoire irrespective of theme, and there is definitely no set list or order of questions. When we come back to practising questions, we don't let them look at their planned answers - they have to improvise a new answer based on the ideas they had come up with, just as we did with the previous GCSE. We work on creating answers in layers. So they can give an immediate response. Then back it up with reasons or if sentences or examples in past or future. They know that the teacher will prompt for this kind of extended detail with follow-up questions such as et... ?  alors...? par exemple...? Pourquoi ? The conjunctions dice game in the second half of this post has featured heavily in getting them to extend answers and respond to being pushed in different directions by the throw of the dice. We have worked hard on the different directions a story can go in, with one idea leading to another, so you don't get stuck thinking up what to say next, as in the mouse and the cookie. We have even turned the order of Year 9 units around, to start with developing ideas into stories

A huge collaborative and joined-up effort of the entire department, based on taking stock from the Y10 Speaking, and gearing up for the mock speaking next month.

Then AQA put out their guidance and it's hard not to feel as if the rug has been pulled out from under our feet.

It's not the 17 questions. They were always going to have to define "amount of information." Although defining it means everyone will make sure they meet it. With a planned and monitored set of questions. And because everyone meets it, the emphasis that swings the grade will fall on the other criteria: accuracy. And it's not the fact that AQA "extended answers" mean very short basic answers. Although a requirement for a set number of 3 clause accurate answers is perhaps best met through planned, prepared, rote learned answers. And it's not even the fact that redirecting prompts like and... so... for example... why?  would now invalidate the pupils' responses, by breaking up the 3 clauses.

Well, yes. It is all that. But the main thing is the removal of the timings. If you no longer have to fill 5 minutes (remember in the old GCSE, there were minimum times on each theme), then you no longer need to have a repertoire you can riff on confidently and indefinitely.

It's not hopeless. We just have to adapt. First we have to audit our questions for each theme. Are there 17 questions? If we include short prompts as further questions, can we do this without disqualifying the pupils' answer from reaching 3 clauses? How many more questions do we need so as to avoid repeating the same questions? How do we allow pupils to show what they can do in terms of inventive longer answers, but still get through 17 questions? Is the AQA exemplar answer "I go to the cinema and we watch action films. I love action films" really going to get a grade 9? If what our pupils can do is superfluous to requirements, what elements of the exam should we have been focusing on? And if the thing that is going to swing the exam is now the accuracy marks, does this mean our pupils now should learn scripted answers off by heart?

Part 1 of: A Spanner in the Works? The new AQA guidance for marking the Conversation.

I don't know where to find this information officially from AQA, but I am hearing there is going to be a re-think. I don't know what will replace it. Hopefully a 5 minute conversation as specified in the spec. Note also that the "I don't like social media because it is boring" exemplification of "good development" is in the spec. I think the problem lies in the way the guidance set hoops to jump through that were going to determine/distort/limit pupil performance instead of assessing their performance. Same goes for the photocard guidance.


I don't know where to start with this post. Or where it is going. But I think I know what the key problem is. And it's NOT the 17 questions. Do I know what to do about it? I'm working it out. But it may take more than one post...

This is going to be big. It wasn't supposed to be. It was supposed to quietly define "amount of information" in the Conversation part of the new AQA GCSE Speaking Exam. 



It was even supposed to disincentivise rote learning of scripted answers. I'm not sure how. Because even my immediate reaction to this is to check how many questions I have for each theme, and how many of them pupils would have extended answers for. And I live and breathe spontaneous answers in my teaching.

It's worth mentioning straight away that an AQA "extended answer" is not what we understand by an extended answer. For our Year 9s, working on extended answers means things like the examples below, moving from a random stream of French, to coherent answers, to past tense stories with cheats, to telling stories. These are written examples, but we spend much more time working on speaking and spontaneity, with strategies like Being Ben or telling stories round the class to develop pupils' ability to think what to say next.




No. For AQA, an extended answer looks like this:



Three clauses of particularly uninspiring language, containing an opinion and a conjugated verb. The example given for "Good Development", "because it is boring", seems a deliberately knowing and sarcastic inclusion. Because this is clearly a response to the initial attempt to do away with fancy pre-learned answers by the GCSE panel when they originally proposed getting rid of the Conversation completely.

Both the GCSE panel and the exam board in their different ways are trying to get rid of pre-learned scripted rote answers.

But I can't see how this is not going to mean a return to rote learned answers. The ticking off of a specified number of answers means teachers having to carefully plan and keep track. Everyone will be making sure they hit the magic number. This then means that what differentiates one pupil's performance from another will be the criteria for Accuracy. And the need to deliver a set number of highly accurate answers will lead to... rote learned answers.

Is the number of questions so prohibitively high that no-one would dream of learning that amount? 17 three-clause answers for each of three themes. With lots of cross-over where a question could be used in more than one of the themes. This is prime "learn by rote" territory.

I actually don't think the 17 answers is the problem. They were always going to have to define "amount of information". And I already suspected that the reduction in topic content was going to shift the balance back to pre-learned answers.

The actual problem is the ditching of the times. Nominally, the Conversation is supposed to last between four and a half and five and a half minutes. A long time to talk on just one theme. The old GCSE Conversation was this long over two themes. So even though I knew that 3 clauses was all that was required for an "extended" answer, I had spotted that filling 5 minutes was going to need pupils to have more to say. Our pupils work on developing answers spontaneously, responding to teacher prompts such as et alors... ? par exemple... ? Pourquoi ? I will look at exactly where we are up to in terms of being able to riff on these prompts to fill 5 minutes in a later post. But all that may now have to go. Perhaps we were fooling ourselves all along that it was what was wanted.

And the five minutes also could have been a disincentive to learn and deliver pre-learned answers. A pre-learned answer, delivered fluently, takes up less time than an improvised answer. Like Achilles chasing the tortoise, the more you fall back on pre-learned answers, the more you find you have to say.

So we were pleased to convince ourselves that improvised answers with the teacher intervening to prompt for more detail, was the best way to fill five minutes.

That's what's gone. It's not 5 minutes anymore. We're left with the requirement to give 17 short but accurate answers. How does this not tip the balance back towards having prepared answers?

Incidentally, this is exactly the same mistake that AQA have made with their interpretation of the specification markscheme for the photo card. With similarly negative consequences for teaching and learning, as I found in this post.

What about the idea of the examiner prompting the pupil for more detail, to push the pupils to extend and develop? Things like et alors... ? pourquoi ? par exemple...? all count as questions, so would make it easy to get to the 17 number. But what they also do is fragment the "extended answer" into single clauses. So instead of demonstrating the pupil's ability to extend spontaneously, they now disqualify the answers from counting as "extended" as each response may now fail to meet the 3 clause threshold.

I have plenty more to say about exactly where we are and what to do next. But that's enough for now. It's NOT the 17 questions. It's the ditching of the 5 minutes. That changes everything.


I know the exam board had to define "amount of information" and don't want to see rote learned answers. They will have tried out how the marking works out on sample recordings of conversation. Have they done the opposite? Have they tried out what sort of conversation you get when you specify 17 short accurate answers? I hope they are right that this means we are still better off teaching pupils to extend their answers spontaneously. I'll explore that in another post...


Thursday, 9 October 2025

Recent Posts on the Way Forward for Languages.

Here's a selection of recent posts on the two systemic problems facing MFL, why they are so bad, and what we could do about it.


 The impact of unfair grading:

https://whoteacheslanguages.blogspot.com/2025/08/unfair-grading-and-its-impact-in.html

Could the success of the languages for all pilot offer hope for being able to offer mainstream language learning post 16? https://whoteacheslanguages.blogspot.com/2025/07/hope-is-in-air.html

The two things that need sorting to allow MFL to flourish. https://whoteacheslanguages.blogspot.com/2025/05/can-we-sort-out-languages-in-english.html


A Level Spanish 2025. How they make the exam too hard for even the tiny minority who take it. https://whoteacheslanguages.blogspot.com/2025/06/really-cool-translation-challenge.html

How bad is the reformed A Level? https://whoteacheslanguages.blogspot.com/2025/05/the-problem-with-this-level.html

How they make the A Level about obscure grammar even when it isn't relevant to answering the purported question. https://whoteacheslanguages.blogspot.com/2021/12/why-i-dont-call-it-summary-question.html 

And worth getting it straight from the horse's mouth. Look at the contempt for language learning in this submission from the people who "reformed" A Level: https://alevelcontent.wordpress.com/wp-content/uploads/2014/10/alcab-rationale-for-english-essay.pdf

The ugly reason why things are so bad for MFL post 16. https://whoteacheslanguages.blogspot.com/2023/10/colonial-curriculum.html

One easy thing to put right: https://whoteacheslanguages.blogspot.com/2025/01/one-thing-that-costs-nothing-which.html

Sunday, 5 October 2025

Am I about to come unstuck? - How much can you rely on a metaphor for learning?

How much can you rely on a metaphor for learning to guide your practice? All models of learning are metaphors. Starting with the popular "storage and retrieval" model. This seems a particularly circular metaphor, based on comparing the brain to computer memory, which in turn is a metaphor based on human memory. Metaphors for the brain often go hand in hand with current technology. This post looks at how previous models included cogs, hydraulics, cables... And of course in languages we have been presented with the metaphor of pillars which I examined in this post, showing how the metaphor revealed more than I expected: carefully constructed, classically impressive pillars of free-standing, stand-alone grammar, vocabulary and phonics were deliberately an act of "folly".

You will know that my favourite metaphor for language learning is the snowball.

A few years ago, the day after a light snowfall, I was walking round the school with a pupil who had been sent out of his French class, to calm things down. He was telling me he didn't mind French lessons, but he just didn't know any French he could use. We stopped and I asked him where all the snow from yesterday had gone. He said, "It all melted, Sir." I asked him, "And where's your French?"

He was quick on the uptake (he is now a vicar, after a time in the police force), and said, "Oooh. Nice metaphor, Sir." He had been there in lessons while all the French was happening, but he hadn't managed to grab hold of any, roll it into a snowball, and stop it from melting.

This is the first use of the metaphor. To warn pupils that their French will melt. That it's their responsibility to grab hold of some and make it theirs. To roll it into a ball and stop it melting. And that more and more French will stick to it.

Then there's the message to teachers. We need to spend time making sure that pupils have a core of sticky French. That they are making it theirs and not letting it melt. It's important that our curriculum is designed so that we develop this core of language, using the same language over and over. And it's important not to design a syllabus where everything is ticked off once. The metaphor tells us that an even coverage of language will melt. What we want is a snowball of language that rolls on from topic to topic, getting bigger and bigger around a sticky core.

This post examines how to design a syllabus where new language adds a layer of accretion to the snowball. It starts from how to add new language to the pupils' existing language. Not by chopping up the language into bitesize chunks of omelette and hoping the pupils can make their own omelette out of it. Mixed metaphor alert. But cooking an omelette out of raw egg is the equivalent of the snowball approach. Chopping up the cold dead omelette is the equivalent of the even coverage approach.

So far so good. But how far does the snowball approach get me with the new GCSE?

For the speaking and writing, it's fundamental to our vision. We use it explicitly in our resources to show pupils how to tackle the demands of the speaking exam, whatever the topic. 

But this new GCSE has a huge gulf between the language needed for the Speaking and Writing exams and the vocabulary list for the Listening and Reading exams. The vocabulary list is not designed to be based on the language needed for the topics or for the tasks of the exam. For example, when you get to the Jobs unit, there are fewer than 10 jobs words in the list. The topic of Jobs is just another arena to meet the non-topic vocabulary. It's the even coverage idea reimagined. This time it's meant to be such intense snowfall that layer upon layer of French has fallen before the previous layer has had time to melt. It risks leaving my pupils with the pathetic snowballs they were so proud of, lost in snowy wastes that stretch off to the horizon. Or at least that's what it's starting to feel like.

But is anyone achieving this deep layer upon layer of snow? Does it mean doing the listening and reading activities from the textbook totally by the book, missing nothing out, because without the intensity of repeated snowfall, melting will happen? To achieve this, we would have to abandon the lessons focused on getting pupils better at using their snowball of French: all the lessons on practising speaking, thinking what to say next, getting really good at using their snowball. Do the textbooks actually deliver the meticulous coverage and re-coverage required for this permanent even coverage not to melt?

Anyway, for me and my Year 11s, who only started Spanish in Year 10, it's too late. The snowball approach is doing its job for the speaking and writing exams. Are we scared of the vocabulary for the listening and reading exams? What I am hoping is that having such a huge snowball will take care of it. Now their snowball is so big with all the things they can say or write, surely the little rocks, bits of grass, sticks, abandoned carrots and coal from other people's melted snowpersons... it can all stick to their snowball as they roll it round and round...? Can it?

This is the idea. That by making sure the pupils have their own snowball which has got bigger and bigger, more and more Spanish will stick to it. Including words that aren't nicely adding a natural layer, but which are odd words that don't seem to stick, but get swept up along with the snow.

Can I rely on this metaphor to get me and my pupils through the exam? Or will the whole thing come unstuck?

Saturday, 20 September 2025

Is ChatGPT the answer to the GCSE vocabulary homework problem?

 In a previous post, I declared that I wasn't going to let the new GCSE Vocabulary List worry me. We didn't use to worry about the vocabulary list in the old GCSE. We didn't even look at it. We should just be able to get on with teaching the pupils Spanish, and they will pick up the words they need. The textbook should cover the words. And how different can it be? And maybe nearer the exam we can give pupils bits of the vocab list to learn or something.

Well. It is different. We are now doing the topic of jobs. And the old materials with vet, builder, cabin-crew, child-minder and all the other jobs, are still useful, but not for those words. There are only about 7 jobs on the vocabulary list. From memory, doctor, hairdresser, teacher, lawyer, celebrity and a couple of others. So when we look at texts in the jobs unit in the book, we are not looking at the jobs words. We are explicitly looking at how the texts on jobs are a vehicle for encountering and re-encountering non-topic vocabulary. Things like have just, started to, chose to, managed to, succeeded in.

One problem is that this no longer bears any resemblance to what the pupils are saying in their speaking and writing, where they will be talking about how they want to be a vet because I would love to work with animals and if I can earn lots of money, I would like to travel the world and see wild animals in different countries... The texts in the book are not modelling the sort of language pupils are looking to use. And the language in the Listening and Reading exam is radically different from the sort of language pupils are learning to use in order to do the tasks required in the Speaking and Writing exam.

This problem of the split between the language needed for the Listening / Reading and the language needed for the Speaking / Writing was one of the main issues with the old GCSE, which I wrote about here. This new GCSE has pushed the wedge in even further. And at the moment, I feel confident teaching pupils for the Speaking and Writing exam. But I have no idea if I am covering what they need for the Listening and Reading exams.

Time to really get to grips with that Vocabulary List.

The problem is, if we are not talking about topics and topic words, how do we break up the vocabulary list for pupils to revise week by week?

Can ChatGPT be the saviour here? If I give it words from the list, can it do simple administrative jobs for me? Things like tidying up the formatting or categorising the words? Can it do more sophisticated jobs, like creating short phrases that would be more memorable than a list of single words?

AQA already supply a spreadsheet of the words, so you can sort for nouns, verbs, adjectives etc. It has a Spanish and an English column. But for verbs, the English column is a mess. It gives different forms of the word (eat, eating, to eat), most of which are superfluous. For example, if I were to paste this into a Quizlet set, it would make the tasks undoable, with the pupils having to type in all three forms verbatim. Can ChatGPT tidy this up?
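
As an aside, this particular tidy-up is exactly the kind of job that doesn't need AI at all. A few lines of Python would do it deterministically, and give the same answer every time. Here is a minimal sketch, assuming the spreadsheet has been saved as vocab.csv with columns called Spanish and English (the file name and column headings are my guesses, not AQA's), and that the English cell is a comma-separated run of forms like "eat, eating, to eat":

# Minimal sketch (my own assumptions, not AQA's actual format): keep only the
# infinitive from the English column so each row becomes one clean pair.
import csv

def tidy_english(cell: str) -> str:
    forms = [f.strip() for f in cell.split(",") if f.strip()]
    for form in forms:
        if form.startswith("to "):   # prefer the "to eat" form
            return form
    return forms[0] if forms else cell   # otherwise fall back to the first form given

with open("vocab.csv", newline="", encoding="utf-8") as src, \
     open("vocab_quizlet.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow([row["Spanish"], tidy_english(row["English"])])

The point of doing it this way is that nothing is invented and nothing is dropped: every row that goes in comes out as exactly one Spanish-English pair, ready to paste into Quizlet.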

Then what if I only want verbs for the topic of Jobs and Education? But, as we saw, also including any non-topic verbs that could be used in this context. So what I actually want is all the verbs on the list, minus the ones that obviously belong to another topic. Can ChatGPT do this?
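
The mechanical half of that doesn't need AI either. If I pick out by hand the verbs that clearly belong to another topic, a simple filter over the tidied file from the sketch above gives me everything else. The exclusion set below is invented purely for illustration; the real one would have to come from reading the actual list:

# Sketch only: filter the two-column file produced above, dropping verbs
# I have hand-picked as obviously belonging to another topic.
import csv

exclude = {"esquiar", "nadar", "cocinar"}   # hypothetical "other-topic" verbs

with open("vocab_quizlet.csv", newline="", encoding="utf-8") as f:
    rows = [row for row in csv.reader(f) if row]

jobs_verbs = [(es, en) for es, en in rows if es not in exclude]

for es, en in jobs_verbs:
    print(f"{es}\t{en}")   # tab-separated, ready to paste into Quizlet

What the code can't do, of course, is the judging. Deciding which verbs "obviously belong to another topic" is the part that needs a teacher, and it's the part I was hoping to delegate.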

What about nouns? Maybe nouns are easier to learn from a list than verbs. What if I gave ChatGPT all the nouns and asked it to sort them into topics? Being very strict with it and saying to make sure to include as many as possible in topics, and to give me a list of all the words it hadn't managed to put in a topic. Then I could ask it to put each word into a short phrase, ready to make into Quizlet cards in Spanish and English. Could ChatGPT do this?

How did it do?

First of all, I noticed that it has developed the annoying habit of asking ridiculous questions to check how you want it to proceed. I strongly suspect this is a deliberate tactic to use up your limit of free questions on the more powerful version.

Then, yes, it can tidy up a list. It can categorise words by topic. It can create short phrases for each word.

BUT...

But when it had finished, I spot-checked its list against the original list of words I had given it. The words from my list that I checked had not made it onto the final ChatGPT list. And there were words on the ChatGPT list that were not on the original AQA list. And, yes, I was clear with it not to add or remove words. But to no avail. It can't stick to that kind of instruction. Its job and purpose is to make stuff up.
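
The checking, at least, really is mechanical. Here is a sketch of the spot check as a simple set comparison, assuming the original AQA words and the ChatGPT output have each been saved as a plain text file with one Spanish word per line (both file names are mine, for illustration):

# Sketch of the check: which words were dropped, and which were invented?
def words(path):
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

original = words("aqa_words.txt")       # the list I gave it
returned = words("chatgpt_words.txt")   # the list it gave back

print("Dropped:", sorted(original - returned))
print("Invented:", sorted(returned - original))

Anything in the first set has gone missing; anything in the second has been made up. In my case, neither set was empty.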

I did give the list of verbs to pupils. For them to tick off the ones they knew, so I could gauge the size of the task ahead of us. And I asked them to highlight ones they thought would be particularly tricky, and to practise conjugating them in sentences to deploy. But they noticed the phrases in English and Spanish had words missing or were odd. And I had to admit I had used ChatGPT. Their reaction was one of horror. AI really does not have a good reputation with young people.

Famously, when it comes to schools and teachers, even "good enough" is not good enough. AI is definitely NOT good enough to do even simple administrative jobs in support of teachers. So why would anyone suggest we use it?

Tuesday, 19 August 2025

Look on my works, ye mighty...

 Pillars. The ubiquitous metaphor of the last few years in Language Learning. Supplanting the four "skills" of Listening, Speaking, Reading, and Writing, it announces Grammar, Vocabulary and Phonics as central to language teaching and learning.

The days of thinking that a good language lesson would automatically be made up of a little bit of listening, a bit of speaking, a bit of reading, and finally maybe time for a little bit of writing, are long gone. With the presumption that it would somehow add up to language learning if left to ferment for long enough. And that pupils would naturally lap up speaking and listening but need more time to stomach the written form of the language. 

This was left behind in the 90s or perhaps the early 2000s, when it evolved into the idea of developing the skills explicitly rather than by osmosis. With as much overlap between speaking and writing as possible. And using reading and listening to model what pupils would speak and write. So whole lessons would be devoted to developing writing or working on the fluency of speaking. It brought with it the realisation that working on what to say, how to develop it and keep it coherent, was fundamental to the development of the "skills" and for language learning.

This is what was to be swept away by the new metaphor of the three pillars. The word "skills" was replaced by "modalities" to reflect the idea that they were just different ways in which the language could be met or practised. With the knowledge of the language central, not the development of pupils' ability to deploy it.

The new paradigm was firmly centred on knowledge of the language.

How powerful is the metaphor of the pillars itself in driving this shift in thinking? What does it reveal?

Firstly, it is very deliberate that they are separate pillars.

The grammatical progression should depend on the logical, step-by-step building up of the grammatical system, not be at the mercy of the requirements of a certain topic or transactional situation. It is a stand-alone pillar, carefully built up to be free-standing.

The vocabulary pillar, again, would not depend on topics. It would be carefully designed on its own terms, selected from the corpus of high-frequency words. Not the words needed to talk about a certain topic or conduct a given task. With the same words being carefully built up in repeating patterns so that they are met again and again. A free-standing pillar with its own carefully constructed logic.

And the same for phonics. Supposedly a third independent pillar with the key sound-spelling features met in a planned and ordered way. Even if this pillar is a bit shorter and stumpier than the other two.

So very deliberately pillars. Not just a decorative image - as cognitive science stipulates, a visual has to be a useful diagram, not a distraction.

Some of us didn't immediately realise this.

The ALL Language World 2025 conference, with its theme of a "rich tapestry", had people talking about weaving skills and topics through the pillars. Or twisting them into a thread. One does not weave pillars. Or twist them. Pillars are pillars. Strong, linear, well-constructed, free-standing, and unencumbered.

Learning in languages was to be unencumbered by anything incidental which might destabilise the pillars or anything rich or complex which might compromise their classical simplicity.


What you can do with pillars, eventually, is put on a roof. A lintel sits across the pillars, connecting them, giving stability, and turning them into a building that is both decorative and useful.

So why is this not part of the metaphor? Again, it is deliberate. The Ofsted research review was written in the philosophy of "novice" and "expert". According to this idea (common to Ofsted thinking about all subjects, not just languages), the carefully selected block by block knowledge had to be in place first. Just the pillars. Too soon for a roof.

Novice pupils (up to and including all but the very highest grades at GCSE) were not ready or able to use the knowledge for communication or comprehension. They were only to meet, practise and be tested on the knowledge for its own sake. The pillars might in theory at some stage support a portico. But this was not the concern of the pillar builders.

To all intents and purposes, it was a beautiful, classically inspired, unfinished building. Full of abstract cultural importance and intellectual value to be admired. Not so much a ruin as a folly.


Tuesday, 5 August 2025

Linguistics in Year 7

 I have been meaning for a while to write a post on the idea of Linguistics in schools and its relation to language teaching. Or more precisely, to explore how encountering a new language brings our pupils up against important concepts in understanding what language is, and how it works. 

For many of our pupils, language and words are transparent, almost invisible tools. They use language without thinking about it, for the purposes of interaction and communication. The idea that language can be studied is new to them.

I have had pupils on trips to France say things like, "French people are so clever and so stupid. So clever because they can speak French. But so stupid because, well, why don't they just say it in English?" Or, "I know why you always get off the bus and go and talk to the people first. It's so you can tell them to all speak French because we're here now." For many of them, English is all they have ever known. And just as a fish doesn't realise it's in water until it's taken out, our pupils aren't aware of their own language until they start to learn another.

Here's a select few of the typical encounters and lightbulb moments in Year 7 French. Some of these are inevitable milestones, some are interesting asides, and some are my own personal favourites...

Je m'appelle... Straightaway, it's the important realisation that French is a language, not a code. It's not just English transposed. So instead of saying "I am called...", you say, "I call myself..." In terms of remembering the exact grammar of reflexive verbs and radical changing verbs, it's too soon to exploit the verb s'appeler any further. But this is a big first realisation - that different languages say things in a different way.

J'ai un chien. This is continued when we get on to nouns. It's a huge and unnerving realisation that your pet isn't, in fact, a dog. "Dog" is an arbitrary name given to the animal in one language. But it's nothing to do with the essence of the beast. We have to correct the Primary SATs mantra that, "A noun is a person, place or thing" to "A noun is the word given to a person, place or thing." And we can point out that "nom" in French means name as well as noun. This is important when we come to gender, and the idea that "a noun has gender" means the word has gender, not the thing.

"Dog" is a nice example too, because we can comment on how chien is related to the word "canine" and German Hund is the same as "hound", but nobody knows where the word "dog" comes from. So not only are we introducing the idea that words are related, evolve, have origins, but also that there are people whose job it is to find this out. Or in the case of "dog", to fail to find this out. There is work still to be done, if the idea tempts them!

Then there's "guinea pig" (in French, Indian pig) which brings up the idea of needing new words for new things and not being totally sure what to call them or where they are from. Which we will meet again with potato and tomato and chocolate and avocado...

When we learn j'ai un chien qui s'appelle..., the pupils are focused on the nouns for pets. But I have to talk to them about the fact that for the next 5 years, pets are not exactly going to feature heavily. But the other words in that sentence are going to be important. J'ai is interesting because we look at how it's a contraction of je + ai. This seems directly analogous to I've = I have, and we may use this parallel to help teach it. But one of the differences is that I have and I've are both correct. Whereas in French,  je ai is incorrect. This can be a route into talking about how the spoken language is the "real" language. It is determined by what people actually say, through a process of evolution and survival of the most efficient. Whereas the written form is secondary and has arbitrary invented rules. 

This conversation may seem unnecessary, but it is inevitable sooner or later given the decayed nature of spoken French compared with the futile attempts of the guardians of the written form to maintain or even recreate archaic forms. If it doesn't come up  now, it will come up when we meet words like forêt. The French long ago (but post 1066?) stopped pronouncing the s in forest. And then also started to omit it in the written form. Until someone complained. And they decided as a compromise to invent the circumflex accent... 

So far, we have learned to say what our dog's name is. But we've learned about the nature of language, the divorce between names and the essence of things, the evolution and relationships between languages, the notion of arbitrary language and of self-declared authority as arbiter, and that all of this can be studied.

J'ai une petite souris blanche. Learning un chien and une souris, as we have seen, was an important step in learning that grammatical gender is not always the same as biological gender. In fact, it's the noun that has gender, not the thing. But what is gender for? The first thing to point out is that when pupils are "confused" by this, what they really mean is, "I'm not familiar with this concept because my language doesn't have it." So there are 2 things to do. One is to show what its role is in other languages. The other is to show why English can manage without. Because we already established that languages go through a natural process of evolution and efficiency. So features will always have a purpose.

We show how gender acts as glue to hold words in a sentence together. We can do this visually, but essentially, the feminine word souris is stuck to the words une and petite and blanche so that the sentence doesn't fall apart. If the words are the building blocks of the sentence, having some kind of glue is a natural thing to want to have. So why doesn't English have this glue? Because have English in we word very order rigid. And if we don't stick to that very rigid word order, the sentence doesn't work. It's as if in English we pile up our words very carefully so they don't all fall down. But we have to be very careful because glue have our doesn't language any. What's the example? A new red shiny fast French train? A fast new shiny French red train?

The house of Fraser. Possessives in French. You have to say la maison de ma tante. This is a great chance to introduce the idea that English = Latin + German(ic). So we can mention 1066 again. But also the idea that often English has 2 ways of saying something. So you can say cleverer or more clever. The first is the Germanic way. The second is the French way. You can say go in or enter. And Frasers Haus is literally the German for Fraser's House. Whereas in French, you have to say... House of Fraser.

In the previous examples, learning a language meant coming up against important ideas from linguistics. In this one, hopefully the knowledge of linguistics helps them remember to get the French right. But the idea that English comes from somewhere is unsettling. What seemed natural, pure, the default, the original language of Hollywood, Shakespeare and the Bible, turns out to be a melting pot of language spaghetti. Or linguine.

If you want, you can take a diversion into how the letter s in English has taken on so many roles acquired from different languages. Possessives. Plurals. Third person verb endings. Contractions. No wonder it sometimes needs an apostrophe to help out and show how many Frasers the house belongs to. Quite an eye-opener after a diet of SPAG bollocknaise in Year 6. Sorry. Pasta jokes again! Quite a can of vermicelli.

Pain au chocolat. This one is a pain. And I don't mean the word "pain". Or the word "chocolat", although it's always good to throw in a little about Nahuatl. But I mainly save that for when they start Spanish. No. It's the au.

What do we do about a word meaning at the or to the in the expression glace à la fraise or pain au chocolat? It's time to come back to the nature of language. French has not evolved to be mapped onto English. (Which, as we saw at the very beginning of the post, can be the default understanding of someone who hasn't yet studied a language.) Words in French do not correspond directly to words in English. (Even though the two languages are closely related.) French corresponds (I could qualify this but we've had enough brackets already) to the World. And English also corresponds to the World. So if there is such a thing as a potato, then French may well have a word for it. And English may well have a word for it. The link isn't language-to-language. It's language-to-thing-to-other-language. But what about less concrete words? Words that don't correspond to a real thing in the outside world? Words whose role is to connect other words in a sentence? This is where a visual depiction on the board with some arrows helps. But basically, if a word is entirely internal to the language, then it is not directly connected to things outside the language. So it can do what it likes. If French people want to say "bread at the chocolate", and that sounds weird to us, then that's because it's a different language.

Which brings us back to where we started. Learning a different language inevitably means learning what things are different, what things are coincidentally the same, what things are linked or have evolved. It brings us up against the nature of language as something arbitrary and natural, but also with conventions and rules. We start to meet some examples of how languages can be structured in different ways, including things we weren't aware of in our own language. We come up against these ideas in very real ways, on a daily basis - I have only given 5 of the most basic here. And with this, comes the idea that language is something that people study: Linguistics.


I do have to say, I'm not a huge fan of suggesting that we need linguistics or literature or politics or history or culture or Culture or essays, in order for language learning to be considered valid. All of them are bound up in it and can be interesting. But it is a peculiarly British thing to consider that they are necessary for language learning to be valued. The worst manifestation of this is what happens post-16, where we have no mainstream language-learning pathway. Our obsession with intellectual heft has left us with only A Level, which is delightful for the happy few, but hardly a mainstream pursuit.