5 Ways to Make Tasks More Challenging

Making tasks challenging is incredibly difficult. A lot of the time, we simply don’t know how well learners will understand our instruction when they have such varying levels of prior knowledge. 

We face a variety of issues in implementing challenge:
– How quickly some students disengage
– Anticipating failure and reducing the level of challenge
– Seeing learners struggle and then overscaffolding
– Not giving enough time 
– Relying on overly familiar strategies
– Anticipating where learners will be in a sequence of learning
– Struggling to challenge pupils we do not know well

It’s also difficult to define challenge. Is it dictated by…
– the amount of time we give for the task?
– the amount of effort or cognitive demand we expect from the learner?
– the number of steps involved?
– the level of prior knowledge they have to access? 

So, challenge = difficult to define and difficult to design.

What can we do to introduce challenge then? 

  1. Productive struggle/productive failure – the jury is out on this, with conflicting research into its efficacy. Is it better to have instruction followed by problem solving (I-PS), or problem solving followed by instruction (PS-I)? Productive struggle/failure argues for the latter. In maths, for example, the evidence seems to point towards I-PS for younger children, with some benefit to PS-I for children of secondary age and up, as shown here – https://twitter.com/Mr_AlmondED/status/1594271352340578307. The intention of productive struggle is to engage the learner with the task and start them thinking about what it asks of them – task completion does not matter at this initial stage. 
  2. If we are to use productive struggle as a strategy, Dooley (2012) provides very useful advice on what to do next, in the form of ‘consolidating tasks’. These are tasks similar in form/nature to the productive struggle task, provided later in a learning sequence. The aim of such a task is to show the learner that, even though they struggled initially, they can now complete the task and have therefore learned something/overcome the challenge. This lends credence to the counter-intuitive idea that what demotivates pupils in the short term can be better for motivation in the long term. 
  3. I mentioned earlier that one issue with challenge is that we anticipate learners struggling and reduce the level of challenge. Sullivan et al (2009) provide us with a solution to this. In the planning stage, we think of ‘enabling prompts’ – prompts that support the learner in attempting the task, without overscaffolding in the moment. A benefit is that the learner continues to attempt the task independently without over-relying on the teacher’s guidance. 
  4. Additionally, they suggest we think of ‘extending prompts’ – these are prompts that can extend the thinking of the learner when they have completed the task, rather than us creating a task in the moment that may not meet our instructional intentions.  The extending prompt is designed to extend the thinking of the learner on the same context as the original task, not create an entirely separate context altogether. 
  5. The final way we can consider challenge is taken from the NCETM. They talk of the FICT model:

    F – Familiarity
    I – Independence
    C – Complexity
    T – Technical Demand 

I’ve adapted this model into scales to demonstrate how each can affect the level of challenge. These are presented in pictures below (one scale for each of Familiarity, Independence, Complexity and Technical Demand):

Nothing of what I have said in this blog is subject-specific, although all of the ideas presented above come from the mathematics community. I think they all have use outside of the maths classroom. If you have any thoughts on challenge, I’d love to hear them.

Starter tasks – are we using them badly?

We know that the brain actively seeks to tie new information to what it already knows (schemas, spreading activation and so on), and that prior knowledge plays a central role in this. Therefore, the intention of an introductory task or ‘starter’ *should* be to elicit prior knowledge. 

This intention may have been lost along the way as the three-part lesson (starter, main, plenary) became more dominant and starters were instead dictated by a set amount of time (i.e., having to be no longer than 5 minutes long). Subsequently, the ‘starter task’ became a vehicle for securing engagement or a tool for managing transition and lost sight of its primary function. We don’t want to fall into the same old trap of making a task engaging for engagement’s sake. 

However, being aware of the challenges that varying levels of prior knowledge can present (e.g. distraction, loss of interest), it is understandable why we may have begun to use starters as a chance to engage all learners. 

As with any task, we should always be asking ourselves, ‘what is it we want the learner to actually think about?’ 

A starter recall task such as “tell me everything you learnt last lesson about the Egyptians” is likely to be ineffective because it doesn’t provide appropriate retrieval cues. It will yield varied results and will not give an honest picture of what all students actually know. 

If we provide opportunity for limited thinking to occur, that is likely what we will get in return, so we should think carefully about how these retrieval questions are written. 

Near the start of a unit, or when recently learnt information is being recalled, the starter should contain sufficient retrieval cues to activate prior knowledge – e.g. ‘The Egyptians believed in an afterlife. How did they prepare the dead for this? What did they do during these ceremonies?’ 

If we know the knowledge is very recently learnt, or it has not yet been recalled enough to be secure in the learner’s mind, then specific (and perhaps even leading) cues should be used. 

As that same knowledge is retrieved later on in a sequence of learning, we can move away from being specific to encourage more thinking on the part of the learner – e.g. what did Egyptians believe about the afterlife? 

If the function of a starter is to primarily activate prior knowledge (and secondarily engage the learner), then it must also present the opportunity to uncover a lack of prior knowledge or misconceptions. 

In order for that to happen, a starter should not only follow the path of simple recall, but also require greater depth of thinking on the part of the learner, so that the teacher can assess their current understanding more easily. Including misconceptions is one way to achieve this. 

Providing a task that demands greater depth of thinking can tell us a lot more and help to guide the rest of a lesson in the appropriate direction – “Were Ancient Greek and Egyptian beliefs similar?” 

Simple recall is an effective starter task, but it must be designed carefully so that it achieves the desired result. 

In summary:
– Is the starter task eliciting/activating prior knowledge?
– Is the starter task just for engagement?
– Does the starter task focus on simple recall?
– Does the starter task provide opportunity for deeper thinking? 

Where Do Classroom Tasks Fail? Part Three

Part one looked at the constructivist teaching fallacy and poor proxies for learning.

Part two looked at the twin sins of curriculum design and mathemathantic effects.

Part three will look at challenge-by-choice, anachronistic tasks and tasks that do not match their instructional intentions.


For those unfamiliar, challenge-by-choice refers to task-based differentiation, whereby the learner chooses which task they do from a selection (usually of three or more tasks). Seemingly, the most popular version of this is referred to as a ‘chilli challenge’, whereby the learner picks the difficulty of their task based on how ‘spicy’ it is.

Akin to fun tasks, this method may be used by teachers to secure the engagement of learners. However, challenge-by-choice presents issues. It often leads to learners not being challenged appropriately: they are not self-aware enough about their current level of understanding, so they pick a task that is either too easy or too difficult. Furthermore, creating three or more different tasks creates a formative assessment nightmare for us as teachers, making it increasingly difficult to provide instant feedback to the learner.

Anachronistic Tasks

Perhaps most commonly seen in history lessons, anachronistic tasks present another area where classroom tasks can fail. Throughout the design of a task, we must ask ourselves what we want the learner to think about when they attempt it. Anachronistic tasks contradict this, presenting the opportunity for the learner to be distracted from what should be the focus. Inevitably, this can result in an ineffective task and learners’ knowledge not being secure.

In history, we are concerned with contemporaneous accounts from the time and this has led to tasks such as writing newspaper reports about the Roman invasion of Britain. The issue here is twofold:

  • First, that if the learner is focusing on the features of a newspaper report and how to write in a journalistic style, they are not thinking as deeply as we would like about the historical content itself. This was something OFSTED noticed in their inspection of history in outstanding primary schools, stating that there were often tasks which “distracted from the history content pupils needed to learn”.
  • Second, such anachronistic tasks could embed misconceptions that things existed outside of the time periods in which they were created (the first newspaper is believed to have been published in 1605 – well after the Romans conquered Britain).

NB: This is not to say anachronism is useless in the study of history. It can be used effectively. For example, the presence of an anachronistic object in a historical photo could tell us a source is not reliable.

Tasks not matching the instructional intention

This links to my first guiding principle of task design, which will be the topic of a future blog. I have also hinted at this issue in a recent blog on KWL grids as an assessment tool, but indulge me as I make a similar point using another common task.

‘Look, say, cover, write, check’ is a common task used to promote spelling in primary schools. It is done in a table format like the one displayed below:

The instructional intention is for pupils to remember the common grapheme (letters representing a sound) used to represent a phoneme (the sound) in a group of spellings – e.g. /ā(r)/ made by air in fair, hair, chair etc.

However, ‘look, say, cover, write, check’ does not get pupils to think about the common graphemes that can be used to make a phoneme. Instead, it gets the learner to focus on the word as an entire unit, rather than breaking it down into parts. It does not get the learner to consider the grapheme-phoneme correspondence. It also gets them to store a word in working memory while it is hidden from view and then write the word onto a piece of paper in front of them. This presents us as teachers with the illusion that a learner is able to spell all the words correctly, but doesn’t tell us if they have understood the learning behind the spellings themselves. In other words, it demonstrates to us that they can store something in short-term memory, but not that they have retrieved knowledge from long-term memory (unless, of course, the child does already know how to spell the word).

This is just one of many tasks that fail to match our instructional intention. This idea will be explored further in the next blog in this series.

Why KWL Grids Are Not Fit For Purpose

If you are not familiar with KWL grids, let me explain. They are an assessment tool with three stages: what the learner already Knows (K), what the learner Wants to know (W) and, finally, what the learner has Learnt (L).

So, they usually look something like this:

Teachers give them to pupils at the start of a unit of learning (e.g. Ancient Egypt) and pupils fill in the first column. However, there is no retrieval cue for the learner – just the empty column, as you saw above.

So, as a means of finding out prior knowledge and gaps in learning between students, this column is extremely limited in its use. We would be better placed as teachers to ask questions that link specifically to our curriculum:

e.g. “What do you know about the use of the River Nile in Ancient Egypt?”

This allows learners to retrieve specific knowledge related to what they will learn, enabling them to potentially see connections between other units, such as rivers studied in geography or other history units.

The middle column is often wasted time. It gets learners to write down what they would like to know about. This leads to learners writing questions about things you won’t cover (as they’re not relevant) or oddly specific questions you likely do not have the subject knowledge to answer.

The final column suffers the same issue as the first. There is no retrieval cue for learners to respond to. They are met with an empty column and expected to dump all the knowledge learnt into it. This, inevitably, leads to learners not writing down all that they truly remember.

Often, the L part of the grid is completed by students flicking back through books. The issue here is that it does not require the learner to retrieve from long-term memory. The learner is just storing content in working memory momentarily, while they copy it across to the grid.

Consider which is more effective:

– Write down everything you have learnt about Ancient Egypt.

– Tell me what you know about the Ancient Egyptian belief of the ‘afterlife’.

The former is likely to elicit some factual knowledge with perhaps no depth or thought given to connections between the facts – at least for the majority of pupils. The latter requires the learner to think harder, to think of specific facts and then consider the relation between them.

The latter is by no means a perfect assessment question, but serves the purpose of assessment far better than an empty L column. Ideally, a series of questions similar to the ‘afterlife’ one are given – perhaps even facilitating links between it and prior knowledge:

e.g. We learnt about the Norse belief of the afterlife when we studied the Vikings and Anglo-Saxons in Year 3. What similarities and differences do you see in their beliefs and the beliefs of the Ancient Egyptians on the afterlife?

I used KWL grids myself. It was only through using them for a while that I discovered their inadequacy. I fell for the illusion that they were an engaging task because I was using the middle column to engage learners and to let them take control of what they learnt.

But assessment is essential. Essential to teaching, essential to curriculum and essential to sequencing learning over time. We do ourselves and the learners we teach a disservice if we don’t assess as accurately as we possibly can.

I do not claim any one type of assessment is the *best* in foundation subjects. However, there are many that serve the purpose more successfully than KWL grids (such as retrieval quizzes, multiple choice Qs, essays and short paragraphs in response to Qs).

Where Do Classroom Tasks Fail? Part Two

This is a blog in a series on task design. The others can be found here.

Part one looked at the constructivist teaching fallacy and poor proxies for learning. This part will look at the twin sins of curriculum design and mathemathantic effects.

The Twin Sins of Curriculum Design

Wiggins and McTighe posit that curriculum design (and therefore indirectly task design) often falls victim to these twin sins:

  1. Activity-focused teaching

“Here, teachers plan and conduct various activities, worrying only about whether they are engaging and kid-friendly.” – Wiggins and McTighe

Activity-focused teaching results in tasks that have been designed to secure engagement, often at the expense of linking appropriately to what has been taught or, more generally, to curriculum goals. Consequently, these tasks are often designed in isolation, separate from the necessary sequencing of learning throughout a unit or curriculum. Tasks designed within an activity-focused framework struggle to meet the intended instructional purpose and are therefore of little use in any assessment of learning the teacher seeks to pursue. A common example from English primary schools is ‘Biscuit Stonehenge’: after learning about Stonehenge, pupils are provided with biscuits to create a model of it. The task has been designed to secure pupil engagement, but holds little-to-no educational value past that.

Example of Biscuit Stonehenge.

NB: There is absolutely nothing wrong with designing tasks that are fun. Learners, especially young children, should build enthusiasm for learning through fun tasks when appropriate. Such fun tasks are very common at the end of learning units, and understandably so. However, when fun tasks do not align with curriculum intentions, they are unlikely to build memory and should not be used *if* building memory is the primary aim. As Wiggins and McTighe put it, “such activities are like cotton candy – pleasant enough in the moment, but lacking long-term substance”. As alluded to in part one with both Coe’s and Mayer’s thinking, we must not mistake engagement for learning.

  2. Coverage-based teaching

Coverage-based teaching refers to covering large amounts of curriculum content at speed and at the expense of any depth of understanding for the learner.

It therefore results in tasks that only allow the learner to create a shallow understanding of knowledge and prevents the building of automaticity or fluency, as not enough time is devoted to building this up through tasks of regular practice. Coverage-based teaching flies in the face of everything we know about how memory is established and maintained over time (e.g. spacing effect, retrieval practice). By rushing through content with superficial and shallow tasks, we operate under the illusion that pupils have learnt it simply because it has been ‘covered’.

Mathemathantic Effects

Clark (1989) argues that poorly designed tasks can exacerbate ‘mathemathantic effects’ (from the Greek manthanein, ‘learning’, and thanatos, ‘death’).

Clark states that, “Whenever an instructional treatment encourages students to replace an existing, effective learning strategy with a dissimilar alternative, learning is depressed.”

Mathemathantic effects can occur when certain areas of learning go through a substitution: learning strategies, motivational goals and student control.

I have taken the examples Clark produces and made them specific to task design below:

Examples of mathemathantic effects on learning strategies:

  • Learners have little prior knowledge but task assumes the learner has automated strategies, knowledge and skills available
  • Learners have much prior knowledge but task requires them to use strategies which interfere with their automated strategies, knowledge and skills

Examples of mathemathantic effects on motivational goals:

  • Learners are afraid of failing but tasks provide minimal guidance or structure
  • Learners want to achieve success but are given a task that is highly structured and provides too much support and guidance

Examples of mathemathantic effects on student control:

  • Learners need a lot of support and guidance but are made to do tasks that are open-ended and ask a lot of them
  • Learners need little support and guidance but are made to do tasks that are highly structured and controlled

The third part will look at challenge-by-choice, anachronistic tasks and when tasks fail to match instructional intentions.

Where Do Classroom Tasks Fail? Part One

This blog is part of a series on task design. The previous blogs can be found here.

It seems obvious that to design tasks effectively, we need to know what can make tasks ineffective. By knowing these pitfalls, we can circumvent them and consequently design more effective tasks.

In a previous blog, I defined constructivism as the belief that a learner ‘constructs’ their own understanding. Constructivism therefore supposes that tasks should give the learner the opportunity to generate such an understanding. This has led to exploratory learning in the classroom, such as inquiry-based learning – the belief being that learners must build knowledge themselves and therefore need to discover it in order to do so.

In critique of this theory, Mayer (2004) offers up the ‘Constructivist Teaching Fallacy’, whereby teachers may believe that a learner being ‘cognitively active’ “translates into a constructivist theory of teaching in which the learner is behaviourally active” also.  

Mayer has depicted this through a 2×2 grid below:

This grid outlines that a constructivist view of teaching believes learning only occurs, or is at the very least most effective, when the bottom-right quadrant is satisfied: learners have to be behaviourally (interpreted to mean physically) active in order to construct knowledge within their minds. We of course know this to be untrue from our daily practice, where learners sit at desks for lengthy periods and still learn quite effectively.

Poor Proxies

When learners are engaging independently with the learning task, we can observe certain behaviours that lead us to believe the task is working effectively. Here we can refer to Rob Coe’s (2014) ‘Poor Proxies for Learning’:

  • Students are busy: lots of work is done (especially written work)
  • Students are engaged, interested, motivated
  • Students are getting attention: feedback, explanations
  • Classroom is ordered, calm, under control (or noisy)
  • Curriculum has been ‘covered’
  • Students have supplied correct answers (even if they have not really understood them, cannot reproduce them independently, will have forgotten them soon, already knew it)
  • Task completion (especially quickly)

*The emboldened parts are my own thinking around poor proxies.

The poor proxies above create an illusion for us as teachers – they lead us to believe learning is happening, when of course we know that learning is invisible (Didau, 2015) and can take place across a series of lessons, not necessarily in just a single one. We must be conscious of these proxies as teachers, and as leaders observing tasks in lessons, as they can mislead our assessment of pupils’ learning. If we believe in these poor proxies, then ineffective tasks mask themselves as effective.

Part two will look at the twin sins of curriculum design and mathemathantic effects.


Coe (2014) – What Makes Great Teaching?

Didau (2015) – Slides from London Festival of Education.

Mayer (2004) – Should There Be a Three-Strikes Rule Against Pure Discovery Learning?

Task Design Series

  1. What is Task Design and why is it important?
  2. What is the purpose of a learning task?
  3. How can a teacher’s view of learning influence the tasks they design?
  4. Planning lessons backwards
  5. Why not plan forwards?
  6. Designing Tasks to Support Long-Term Memory
  7. Where Do Classroom Tasks Fail? Part One
  8. Where Do Classroom Tasks Fail? Part Two
  9. Where Do Classroom Tasks Fail? Part Three
  10. Why KWL Grids Are Not Fit For Purpose
  11. Starter Tasks – Are We Using Them Badly
  12. 5 Ways to Make Tasks More Challenging

Designing Tasks to Support Long-Term Memory

This is blog 6 in a series on Task Design. The other blogs can be found here – Task Design Series.

“Learning is defined as an alteration in long-term memory. If nothing has altered in long-term memory, nothing has been learned.” – Sweller (2011)

This definition of learning as a change in long-term memory (LTM) has become common parlance over the past few years. If we are to take Sweller’s comments as the accepted truth, we must consider how tasks are designed to facilitate the building of LTM.

In order to do that, we have to look at LTM with greater precision. LTM is often divided into two types: declarative memory and procedural (non-declarative) memory.

Declarative memory is characterised as ‘knowing what’ – it is the storage of facts and events. For example, knowing that WW2 lasted from 1939-1945. Forming this type of memory can be rapid, with possibly even just one instance of attending to knowledge being enough. As Ullman (2004) intimates, declarative memory “is important for the very rapid learning of arbitrarily-related information – that is, for the associative binding of information”.

Declarative memory is based on recall and retrieval; because of this, it is also known as ‘explicit’ memory, as we can consciously remember and recall it. Declarative memory is said to have ‘representational flexibility’ – that is, it can be recalled independent of the circumstances in which it was learnt.

Declarative memory is also believed to have the property of compositionality (Cohen et al, 1997) – the ability to represent the whole and its constituent parts simultaneously – e.g. democracy as people having power, but also as elections, voting, government, representation etc. Cohen et al believe it is this compositionality that allows us to manipulate representations and bind information in our heads, therefore, declarative memory is “a fundamentally relational representation system supporting memory for the relationships among perceptually distinct objects”.

In contrast, procedural memory is characterised as ‘knowing how’ – it is the storage of how to do things. For example, performing the steps of long division. Procedural learning aids the performance of a task without conscious involvement and that is why it is also referred to as ‘implicit’ or ‘non-declarative’ memory, as we cannot always articulate these memories, which are formed from habit.

It is also called implicit memory because previous experience of performing a task helps you to perform it better, without conscious or explicit awareness of this. Forming this type of memory happens through slow, incremental learning – as such, one instance is not deemed enough for good performance of the procedure (in contrast to declarative memory). The ability to perform the procedure develops from experience-based tuning, where random or conscious adjustments build your ability to perform it.

Koziol and Budding (2009) summarise the two types of LTM here:

“Declarative learning and memory lends itself to explicit, conscious recollection. Procedural learning and memory are implicit; the actual learning is inferred from an individual’s improvement in performing the task.”

So, we believe that learning is when long-term memory is altered, and that there are two types of long-term memory: declarative and procedural. It would be fitting therefore to consider that there are two types of task also: declarative and procedural tasks.

Declarative tasks seek to build memory around facts and events.

Procedural tasks seek to build memory around skills and procedures.

These two types of tasks are not a dichotomy, but actually closely intertwined. Serving a ball in tennis is a procedural act, but a pupil must first learn the declarative knowledge required to perform the serve (i.e., the height to throw the ball, position of the feet, where to strike the ball on the racquet etc). As Daniel Willingham (2009) posits, “Factual knowledge must precede skill”.

What are the takeaways if we are to pursue these two types of tasks?

Declarative memory tasks:

  • Design tasks to enable the learner to bind information together
  • Design tasks to facilitate spreading activation in the learner’s brain
  • Revisit declarative knowledge in a variety of tasks to facilitate representational flexibility
  • Consider task dependency – how one task builds or relies on tasks that have preceded it

Procedural memory tasks:

  • Design tasks that allow for identical procedural practice until the procedure is learnt


Cohen, N. J., Poldrack, R. A., & Eichenbaum, H. (1997). Memory for Items and Memory for Relations in the Procedural/Declarative Memory Framework. Memory, 5(1-2), 131–178.

Koziol, L. F., & Budding, D. E. (2009). Subcortical structures and cognition: Implications for neuropsychological assessment. New York: Springer.

Ullman, M. T. (2004) Contributions of memory circuits to language: The declarative/procedural model.

Willingham, D. (2009) Why Don’t Students Like School?

Leading Teacher Development

Teacher development, and the leadership of it, is a hot topic at the moment.

It is therefore worth pausing to ask ourselves: ‘What is teacher development?’ And ‘how should we lead it?’

Teacher Development

The NPQ Framework for Leading Teacher Development states that teacher development, “is likely to involve a lasting change in teachers’ capabilities or understanding”. This is an agreeable definition, but why should we change teacher understanding?

Josh Goodrich puts forth a ‘change sequence’ that shows the knock-on effect that can occur through improving teachers’ understanding.

Teacher knowledge >>> teacher action >>> student knowledge >>> student action

So, improving teacher knowledge can impact on student outcomes, but do we know this to be actually true? Well, yes.

  • Expert teachers can help pupils to learn up to 4x faster (Wiliam, 2016)
  • More experienced teachers help pupils to achieve more than their novice peers (Kraft and Papay, 2014)*
  • The difference between an expert teacher and a ‘bad’ teacher could be as high as a whole year’s learning (Sutton Trust, 2011).

*NB: experience does not equate to expertise.

Expert teachers appear to have a noticeable impact on student outcomes. Therefore, the goal of teacher development should be not only to have a ‘lasting change’ on capability and understanding, but to also support teachers in the journey from novice to expert.

What is an expert teacher?

Again, hard to define. It is an interplay between talent and expertise with the scales heavily tipped in expertise’s favour. So, what does the literature say makes a teacher an expert?

There are many characteristics posited in the literature, yet three recur frequently:

  1. A knowledge bank built up over thousands of hours
  2. The ability to respond to situations based on their familiarity
  3. A degree of automaticity

Consequently, teacher development should focus on developing these three characteristics within every teacher. How do we do that?

Glaser (1993) talks of expertise as a ‘change in agency over time’ in three stages.

  • Stage 1 – ‘Externally supported’

Here, the teacher is a novice. They require a highly structured teacher development programme, highly specific coaching, plenty of deliberate practice and short, regular feedback cycles. (The Early Career Framework will serve to better support novice teachers during this stage.)

  • Stage 2 – ‘Transitional’

The teacher has now gained some experience in the classroom and is starting to gradually build their bank of knowledge, familiarity of situations and their automaticity of response.

They require the same support as in stage 1 but the level of which should be reduced to align with their growing capability and understanding.

  • Stage 3 – ‘Self-regulatory’

Now, the teacher is an expert. They can regulate themselves and take greater ownership of their professional development.

These three stages are complemented by what we can infer from the Expertise Reversal Effect (Kalyuga, 2007).

‘It takes 10 years to become an expert’ is often bandied around, yet there is no set amount of time that this takes – certainly not one that any research has measured or could measure. What we do know, however, is that teacher development can speed up the journey from novice to expert.

How should we lead teacher development?

There are certain conditions that every leader of teacher development should consider: culture, bias, priorities, expectations, systems, to name but a few. Effective teacher development can still occur if one of these conditions is ignored, but it is more likely to be effective when they are all considered in conjunction.

The science of learning has taken the educational world by storm in recent years. It is important to recognise that this should apply to teaching teachers and not just teaching children. Everything we have learnt about cognitive load, working memory and the like, should factor into any course of teacher development.

As such, teacher development should have its own curriculum. It should be sequenced well, build on prior knowledge, and allow for plenty of practice. We should then move away from the traditional staff meeting model pictured on the left below, and towards the model on the right that Matt Swain and Lloyd Williams-Jones recommend:

It would be foolish of me to think that I could present something better than what the EEF have come up with in their ‘Effective Professional Development’ document. They outline four groupings, each with individual mechanisms. These have been condensed from decades of research and allow us to design courses of effective professional development more easily.

To end, what does leading teacher development look like in practice?

In a previous blog, I wrote about using the EEF mechanisms to implement a behaviour for learning strategy. You can find that blog here.