School Development – Don’t Upset the Ecosystem

I’m not a fan of education metaphors, but in the spirit of true hypocrisy, here goes.

In an ecosystem, some species act as a keystone. Their role is like the keystone of an arch – remove it and the entire arch collapses.

A famous example is the elimination of the gray wolf from Yellowstone National Park. Without these predators, animals began to over-graze, depleting plant populations. Wolves had also kept grazing animals away from beaver habitats; without the wolves, those habitats were over-grazed too, and the beaver population declined through lack of food. Without beaver dams to slow the water, river banks eroded, displacing soil and plants.

When we cut or change a school initiative, it has the potential to cause the same kind of knock-on effect.

Imagine: a school has decided its phonics scheme is not having the desired impact, so it changes to a different scheme. This one change has a myriad of effects:

  • External providers train staff to deliver the new scheme. This takes up development time originally planned for other priorities and costs a significant amount of money.
  • There are new monitoring practices and assessments, which take up a lot of time.
  • Leaders adapt the timetable to meet the new scheme’s requirements, affecting the timetabling of other subjects.
  • The new scheme requires children to be split into more groups. This takes up more intervention space and affects the timetabling of other interventions.
  • The increased number of groups requires more staff to lead them, taking staff away from other responsibilities.
  • Funding is allocated for new books aligned to the scheme, leaving less funding available elsewhere.
  • Sounds now appear in a different order, which affects the learning of those who have already received some phonics instruction, requiring the order of the scheme to be adapted during its initial implementation.
  • Results do not improve in the first two years of the scheme’s implementation because the changes are widespread and not embedded.

You may be thinking this sounds a bit extreme, but it is a real example I observed. One change can have multiple, unforeseen effects.

This is where the principle of Chesterton’s fence resonates:

‘The principle that reforms should not be made until the reasoning behind the existing state of affairs is understood’.

In schools, we are quick to enact the changes we want to see. Unfortunately, these changes can often be driven by ideology rather than evidence, by lethal mutations, or by a sense of universalism (i.e. seeing something work in another school and assuming it will therefore work in our context). This can cause us to act without spending appropriate time considering the effects of the change.

Thinking about this caused me to reflect on my own practice in leadership.

When I was a novice leader, I always thought about what needed to change and what I was changing it to.

As I became more experienced, I instead thought about why something needed to change and whether it really needed to change, or just needed to be tweaked.

When making changes in schools, let’s think about the impact on the ecosystem.

5 Ways to Make Tasks More Challenging

Making tasks challenging is incredibly difficult. A lot of the time, we simply don’t know how well learners will understand our instruction when they have such varying levels of prior knowledge. 

We face a variety of issues in implementing challenge:
– Some students disengage quickly
– We anticipate failure and reduce the level of challenge
– We see learners struggle and then over-scaffold
– We don’t give enough time
– We rely on overly familiar strategies
– We struggle to anticipate where learners will be in a sequence of learning
– It is hard to challenge pupils we do not know well

It’s also difficult to define challenge. Is it dictated by…
– the amount of time we give for the task?
– the amount of effort or cognitive demand we expect from the learner?
– the number of steps involved?
– the level of prior knowledge they have to access? 

So, challenge = difficult to define and difficult to design.

What can we do to introduce challenge then? 

  1. Productive struggle/productive failure – the jury is still out here, with conflicting research into its efficacy. Is it better to have instruction followed by problem solving (I-PS) or problem solving followed by instruction (PS-I)? Productive struggle/failure argues for the latter. In maths, for example, the evidence seems to point towards I-PS for younger children, with some benefit to PS-I for children of secondary age and up, as shown here – https://twitter.com/Mr_AlmondED/status/1594271352340578307. The intention of productive struggle is to engage the learner with the task and get them thinking about what it is asking of them – task completion does not matter at this initial stage.
  2. If we are to use productive struggle as a strategy, Dooley (2012) provides very useful advice on what to do next, in the form of ‘consolidating tasks’. These are tasks similar in form to the productive struggle task, provided later in a learning sequence. The aim of such a task is to show the learner that, even though they struggled initially, they can now complete it and have therefore learned something and overcome the challenge. This lends credence to the counter-intuitive idea that demotivating pupils in the short term can be better for their motivation in the long term.
  3. I mentioned earlier that one issue with challenge is that we anticipate learners struggling and reduce the level of challenge. Sullivan et al. (2009) provide a solution: at the planning stage, we prepare ‘enabling prompts’ – prompts that support the learner in attempting the task without over-scaffolding in the moment. The benefit is that the learner continues to attempt the task independently, without over-relying on the teacher’s guidance.
  4. Additionally, they suggest we prepare ‘extending prompts’ – prompts that extend the learner’s thinking once they have completed the task, rather than us improvising a task in the moment that may not meet our instructional intentions. An extending prompt is designed to extend the learner’s thinking within the same context as the original task, not to create a separate context altogether.
  5. The final way we can consider challenge is taken from the NCETM. They talk of the FICT model:

    F – Familiarity
    I – Independence
    C – Complexity
    T – Technical Demand 

I’ve adapted this model into scales to demonstrate how each factor can affect the level of challenge. These are presented in the pictures below:

Familiarity:

Independence:

Complexity:

Technical Demand:

None of what I have said in this blog is subject-specific, although all of the ideas presented above come from the mathematics community. I think they all have use outside the maths classroom. If you have any thoughts on challenge, I’d love to hear them.

Starter tasks – are we using them badly?

We know that the brain actively seeks to tie new information to what it already knows (schemas, activation theory, etc.) and that prior knowledge plays a central role in this. The intention of an introductory task or ‘starter’ *should* therefore be to elicit prior knowledge.

This intention may have been lost along the way as the three-part lesson (starter, main, plenary) became more dominant and starters were instead dictated by a set amount of time (i.e. having to be no longer than five minutes). Subsequently, the ‘starter task’ became a vehicle for securing engagement or a tool for managing transitions, and lost sight of its primary function. We don’t want to fall into the old trap of making a task engaging for engagement’s sake.

However, being aware of the challenges that varying levels of prior knowledge can present (e.g. distraction, loss of interest), it is understandable why we may have begun to use starters as a chance to engage all learners. 

As with any task, we should always be asking ourselves, ‘what is it we want the learner to actually think about?’ 

A starter recall task such as “tell me everything you learnt last lesson about the Egyptians” is likely to be ineffective because it doesn’t provide appropriate retrieval cues. It will yield varied results and will not give an honest picture of what all students actually know. 

If we provide opportunity for limited thinking to occur, that is likely what we will get in return, so we should think carefully about how these retrieval questions are written. 

Near the start of a unit, or when recently learnt information is being recalled, the starter should contain sufficient retrieval cues to activate prior knowledge – e.g. ‘The Egyptians believed in an afterlife. How did they prepare the dead for this? What did they do during these ceremonies?’

If we know the knowledge is very recently learnt, or it has not yet been recalled enough to be secure in the learner’s mind, then specific (and perhaps even leading) cues should be used. 

As that same knowledge is retrieved later in a sequence of learning, we can move away from being specific to encourage more thinking on the part of the learner – e.g. ‘What did the Egyptians believe about the afterlife?’

If the function of a starter is to primarily activate prior knowledge (and secondarily engage the learner), then it must also present the opportunity to uncover a lack of prior knowledge or misconceptions. 

In order for that to happen, a starter should not only follow the path of simple recall, but also require greater depth of thinking from the learner, so that the teacher can assess their current understanding more easily. Including misconceptions is one way to achieve this.

Providing a task that demands greater depth of thinking can tell us a lot more and help to guide the rest of a lesson in the appropriate direction – “Were Ancient Greek and Egyptian beliefs similar?” 

Simple recall can be an effective starter task, but it must be designed carefully so that it achieves the desired result.

In summary:
– Is the starter task eliciting/activating prior knowledge?
– Is the starter task just for engagement?
– Does the starter task focus on simple recall?
– Does the starter task provide opportunity for deeper thinking? 

Where Do Classroom Tasks Fail? Part Three

Part one looked at the constructivist teaching fallacy and poor proxies for learning.

Part two looked at the twin sins of curriculum design and mathemathantic effects.

Part three will look at challenge-by-choice, anachronistic tasks and tasks that do not match their instructional intentions.

Challenge-by-choice

For those unfamiliar, challenge-by-choice refers to task-based differentiation, whereby the learner chooses which task they do from a selection (usually of three or more tasks). Seemingly, the most popular version of this is referred to as a ‘chilli challenge’, whereby the learner picks the difficulty of their task based on how ‘spicy’ it is.

Akin to fun tasks, this method may be used by teachers to secure learners’ engagement. However, challenge-by-choice presents issues. It often leads to learners not being challenged appropriately: they are not sufficiently aware of their own current level of understanding, so they pick a task that is either too easy or too difficult. Moreover, creating three or more different tasks is a formative assessment nightmare for us as teachers, making it increasingly difficult to provide instant feedback to the learner.

Anachronistic Tasks

Perhaps most commonly seen in history lessons, anachronistic tasks are another area where classroom tasks can fail. Throughout the design of a task, we must ask ourselves what we want the learner to think about when they attempt it. Anachronistic tasks contradict this, presenting the opportunity for the learner to be distracted from what should be the focus. Inevitably, this can result in an ineffective task and in learners’ knowledge not being secure.

In history, we are concerned with contemporaneous accounts, and this has led to tasks such as writing newspaper reports about the Roman invasion of Britain. The issue here is twofold:

  • First, if the learner is focusing on the features of a newspaper report and how to write in a journalistic style, they are not thinking as deeply as we would like about the historical content itself. This was something Ofsted noticed in their inspection of history in outstanding primary schools, stating that there were often tasks which “distracted from the history content pupils needed to learn”.
  • Second, such anachronistic tasks could embed the misconception that things existed outside the time periods in which they were created (the first newspaper is believed to have been written in 1605 – well after the Romans conquered Britain).

NB: This is not to say anachronism is useless in the study of history. It can be used effectively. For example, the presence of an anachronistic object in a historical photo could tell us a source is not reliable.

Tasks not matching the instructional intention

This links to my first guiding principle of task design, which will be the topic of a future blog. I have also hinted at this issue in a recent blog on KWL grids as an assessment tool, but indulge me as I make a similar point using another common task.

‘Look, say, cover, write, check’ is a common task used to practise spelling in primary schools. It is done in a table format like the one displayed below:

The instructional intention is for pupils to remember the common grapheme (letters representing a sound) used to represent a phoneme (the sound) in a group of spellings – e.g. /ā(r)/ made by air in fair, hair, chair etc.

However, ‘look, say, cover, write, check’ does not get pupils to think about the common graphemes that can be used to make a phoneme. Instead, it gets the learner to focus on the word as a whole unit rather than breaking it down into parts; it does not get them to consider the grapheme–phoneme correspondence. It simply gets them to hold a word in working memory while it is hidden from view and then write it onto the paper in front of them. This presents us as teachers with the illusion that a learner can spell all the words correctly, but doesn’t tell us whether they have understood the learning behind the spellings themselves. In other words, it demonstrates that they can store something in short-term memory, not that they have retrieved knowledge from long-term memory (unless, of course, the child already knows how to spell the word).

This is just one of many tasks that fail to match our instructional intention. This idea will be explored further in the next blog in this series.

Why KWL Grids Are Not Fit For Purpose

If you are not familiar with KWL grids, let me explain. They are a three-stage assessment tool: what the learner already Knows (K), what the learner Wants to know (W) and, finally, what the learner has Learnt (L).

So, they usually look something like this:

Teachers give them to pupils at the start of a unit of learning (e.g. Ancient Egypt) and pupils fill in the first column. However, there is no retrieval cue for the learner, just the empty column you saw above.

So, as a means of finding out prior knowledge and gaps in learning between students, this column is extremely limited in its use. We would be better placed as teachers to ask questions that link specifically to our curriculum:

e.g. “What do you know about the use of the River Nile in Ancient Egypt?”

This allows learners to retrieve specific knowledge related to what they will learn, enabling them to potentially see connections between other units, such as rivers studied in geography or other history units.

The middle column is often wasted time. It asks learners to write down what they would like to know about. This leads to learners writing questions about things you won’t cover (as they’re not relevant) or oddly specific questions you likely do not have the subject knowledge to answer.

The final column suffers the same issue as the first. There is no retrieval cue for learners to respond to. They are met with an empty column and expected to dump all the knowledge learnt into it. This, inevitably, leads to learners not writing down all that they truly remember.

Often, the L part of the grid is completed by students flicking back through books. The issue here is that it does not require the learner to retrieve from long-term memory. The learner is just storing content in working memory momentarily, while they copy it across to the grid.

Consider which is more effective:

– Write down everything you have learnt about Ancient Egypt.

– Tell me what you know about the Ancient Egyptian belief of the ‘afterlife’.

The former is likely to elicit some factual knowledge with perhaps no depth or thought given to connections between the facts – at least for the majority of pupils. The latter requires the learner to think harder, to think of specific facts and then consider the relation between them.

The latter is by no means a perfect assessment question, but it serves the purpose of assessment far better than an empty L column. Ideally, a series of questions similar to the ‘afterlife’ one is given – perhaps even facilitating links with prior knowledge:

e.g. We learnt about the Norse belief of the afterlife when we studied the Vikings and Anglo-Saxons in Year 3. What similarities and differences do you see in their beliefs and the beliefs of the Ancient Egyptians on the afterlife?

I used KWL grids myself; it was only through using them for a while that I discovered their inadequacy. I fell for the illusion that they were an engaging task because I was using the middle column to engage learners and to let them take control of what they learnt.

But assessment is essential. Essential to teaching, essential to curriculum and essential to sequencing learning over time. We do ourselves and the learners we teach a disservice if we don’t assess as accurately as we possibly can.

I do not claim any one type of assessment is the *best* in foundation subjects. However, there are many that serve the purpose more successfully than KWL grids (such as retrieval quizzes, multiple choice Qs, essays and short paragraphs in response to Qs).

Where Do Classroom Tasks Fail? Part Two

This is a blog in a series on task design. The others can be found here.

Part one looked at the constructivist teaching fallacy and poor proxies for learning. This part will look at the twin sins of curriculum design and mathemathantic effects.

The Twin Sins of Curriculum Design

Wiggins and McTighe posit that curriculum design (and therefore indirectly task design) often falls victim to these twin sins:

  1. Activity-focused teaching

“Here, teachers plan and conduct various activities, worrying only about whether they are engaging and kid-friendly.” – Wiggins and McTighe

Activity-focused teaching results in tasks designed to secure engagement, often at the expense of linking appropriately to what has been taught or to wider curriculum goals. Consequently, these tasks are often designed in isolation, separate from the necessary sequencing of learning across a unit or curriculum. Tasks designed within an activity-focused framework struggle to meet the intended instructional purpose and are therefore useless for any assessment of learning the teacher seeks to pursue. A common example from English primary schools is ‘Biscuit Stonehenge’: after learning about Stonehenge, pupils are given biscuits to build a model of it. The task has been designed to secure pupil engagement, but holds little-to-no educational value beyond that.

Example of Biscuit Stonehenge.

NB: There is absolutely nothing wrong with designing tasks that are fun. Learners, especially young children, should build enthusiasm for learning through fun tasks when appropriate. Such fun tasks are very common at the end of units of learning, and understandably so. However, when fun tasks do not align with curriculum intentions, they are unlikely to build memory and should not be used *if* that is the primary aim. As Wiggins and McTighe put it, “such activities are like cotton candy – pleasant enough in the moment, but lacking long-term substance”. As alluded to in part one with both Coe’s and Mayer’s thinking, we must not mistake engagement for learning.

  2. Coverage-based teaching

Coverage-based teaching refers to covering large amounts of curriculum content at speed and at the expense of any depth of understanding for the learner.

It therefore results in tasks that allow the learner to develop only a shallow understanding, and it prevents the building of automaticity or fluency, as not enough time is devoted to regular practice. Coverage-based teaching flies in the face of everything we know about how memory is established and maintained over time (e.g. the spacing effect, retrieval practice). By rushing through content with superficial, shallow tasks, we operate under the illusion that pupils have learnt it simply because it has been ‘covered’.

Mathemathantic Effects

Clark (1989) argues that poorly designed tasks can exacerbate ‘mathemathantic effects’ (manthanein = learning + Thanatos = death).

Clark states that, “Whenever an instructional treatment encourages students to replace an existing, effective learning strategy with a dissimilar alternative, learning is depressed.”

Mathemathantic effects can occur when certain areas of learning go through a substitution: learning strategies, motivational goals and student control.

I have taken the examples Clark provides and made them specific to task design below:

Examples of mathemathantic effects on learning strategies:

  • Learners have little prior knowledge, but the task assumes they have automated strategies, knowledge and skills available
  • Learners have much prior knowledge, but the task requires them to use strategies that interfere with their automated strategies, knowledge and skills

Examples of mathemathantic effects on motivational goals:

  • Learners are afraid of failing, but the task provides minimal guidance or structure
  • Learners want to achieve success, but the task is highly structured and provides too much support and guidance

Examples of mathemathantic effects on student control:

  • Learners need a lot of support and guidance, but are made to do tasks that are open-ended and demand a lot of them
  • Learners need little support and guidance, but are made to do tasks that are highly structured and controlled

The third part will look at challenge-by-choice, anachronistic tasks and when tasks fail to match instructional intentions.

Where Do Classroom Tasks Fail? Part One

This blog is part of a series on task design. The previous blogs can be found here.

It seems obvious that to design tasks effectively, we need to know what can make tasks ineffective. By knowing these pitfalls, we can circumvent them and consequently design more effective tasks.

The Constructivist Teaching Fallacy

I defined constructivism in a previous blog as the belief that a learner ‘constructs’ their own understanding. Constructivism therefore supposes that tasks should give the learner the opportunity to generate such an understanding. This has led to exploratory approaches in the classroom, such as inquiry-based learning – the belief being that learners should build knowledge themselves and therefore need to discover it in order to do so.

In critique of this theory, Mayer (2004) offers up the ‘Constructivist Teaching Fallacy’, whereby teachers may believe that a learner being ‘cognitively active’ “translates into a constructivist theory of teaching in which the learner is behaviourally active” also.  

Mayer has depicted this through a 2×2 grid below:

This grid outlines that a constructivist view of teaching believes learning only occurs, or is at the very least most effective, when the bottom-right quadrant is satisfied: learners have to be behaviourally (interpreted to mean physically) active in order to construct knowledge within their minds. We of course know this to be untrue from our daily practice, where learners sit at desks for lengthy periods and still learn quite effectively.

Poor Proxies

When learners are engaging independently with the learning task, we can observe certain behaviours that lead us to believe the task is working effectively. Here we can refer to Rob Coe’s (2014) ‘Poor Proxies for Learning’:

  • Students are busy: lots of work is done (especially written work)
  • Students are engaged, interested, motivated
  • Students are getting attention: feedback, explanations
  • Classroom is ordered, calm, under control (or noisy)
  • Curriculum has been ‘covered’
  • Students have supplied correct answers (even if they have not really understood them, cannot reproduce them independently, will have forgotten them soon, already knew it)
  • Task completion (especially quickly)

*The emboldened parts are my own thinking around poor proxies.

The poor proxies above create an illusion for us as teachers – they lead us to believe learning is happening, when of course we know that learning is invisible (Didau, 2015) and can take place across a series of lessons, not necessarily within a single lesson. We must be conscious of these proxies, as teachers and as leaders observing tasks in lessons, because they can mislead our assessment of pupils’ learning. If we believe in these poor proxies, ineffective tasks mask themselves as effective.

Part two will look at the twin sins of curriculum design and mathemathantic effects.

References:

Coe (2014) – What Makes Great Teaching?

Didau (2015) – Slides from London Festival of Education.

Mayer (2004) – Should There Be a Three-Strikes Rule Against Pure Discovery Learning?

Task Design Series

  1. What is Task Design and why is it important?
  2. What is the purpose of a learning task?
  3. How can a teacher’s view of learning influence the tasks they design?
  4. Planning lessons backwards
  5. Why not plan forwards?
  6. Designing Tasks to Support Long-Term Memory
  7. Where Do Classroom Tasks Fail? Part One
  8. Where Do Classroom Tasks Fail? Part Two
  9. Where Do Classroom Tasks Fail? Part Three
  10. Why KWL Grids Are Not Fit For Purpose
  11. Starter Tasks – Are We Using Them Badly?
  12. 5 Ways to Make Tasks More Challenging

Designing Tasks to Support Long-Term Memory

This is blog 6 in a series on Task Design. The other blogs can be found here – Task Design Series.

“Learning is defined as an alteration in long-term memory. If nothing has altered in long-term memory, nothing has been learned.” – Sweller (2011)

This definition of learning as a change in long-term memory (LTM) has become common parlance over the past few years. If we are to take Sweller’s comments as the accepted truth, we must consider how tasks are designed to facilitate the building of LTM.

In order to do that, we have to look at LTM with greater precision. LTM is often divided into two types: declarative memory and procedural (non-declarative) memory.

Declarative memory is characterised as ‘knowing what’ – it is the storage of facts and events. For example, knowing that WW2 lasted from 1939 to 1945. Forming this type of memory can be rapid, with possibly even just one instance of attending to knowledge being enough. As Ullman (2004) intimates, declarative memory “is important for the very rapid learning of arbitrarily-related information – that is, for the associative binding of information”.

Declarative memory is based on recall and retrieval; because of this, it is also known as ‘explicit’ memory, as we can consciously remember and recall it. Declarative memory is said to have ‘representational flexibility’ – that is, it can be recalled independently of the circumstances in which it was learnt.

Declarative memory is also believed to have the property of compositionality (Cohen et al, 1997) – the ability to represent the whole and its constituent parts simultaneously – e.g. democracy as people having power, but also as elections, voting, government, representation, etc. Cohen et al believe it is this compositionality that allows us to manipulate representations and bind information in our heads; therefore, declarative memory is “a fundamentally relational representation system supporting memory for the relationships among perceptually distinct objects”.

In contrast, procedural memory is characterised as ‘knowing how’ – it is the storage of how to do things. For example, performing the steps of long division. Procedural learning aids the performance of a task without conscious involvement, which is why it is also referred to as ‘implicit’ or ‘non-declarative’ memory: we cannot always articulate these memories, which are formed through habit.

It is called implicit memory because previous experience of performing a task helps you to perform it better, without conscious or explicit awareness of that experience. Forming this type of memory happens through slow, incremental learning – a single instance is not enough for good performance of the procedure (in contrast to declarative memory). The ability to perform the procedure develops through experience-based tuning, where random or conscious adjustments gradually build your ability to perform it.

Koziol and Budding (2009) summarise the two types of LTM as follows:

“Declarative learning and memory lends itself to explicit, conscious recollection. Procedural learning and memory are implicit; the actual learning is inferred from an individual’s improvement in performing the task.”

So, we believe that learning is an alteration of long-term memory, and that there are two types of long-term memory: declarative and procedural. It is fitting, therefore, to consider that there are two types of task as well: declarative and procedural tasks.

Declarative tasks seek to build memory around facts and events.

Procedural tasks seek to build memory around skills and procedures.

These two types of task are not a dichotomy; they are closely intertwined. Serving a ball in tennis is a procedural act, but a pupil must first learn the declarative knowledge required to perform the serve (i.e. the height to throw the ball, the position of the feet, where to strike the ball on the racquet, etc.). As Daniel Willingham (2009) posits, “Factual knowledge must precede skill”.

What are the takeaways if we are to pursue these two types of tasks?

Declarative memory tasks:

  • Design tasks to enable the learner to bind information together
  • Design tasks to facilitate spreading activation in the learner’s brain
  • Revisit declarative knowledge in a variety of tasks to facilitate representational flexibility
  • Consider task dependency – how one task builds or relies on tasks that have preceded it

Procedural memory tasks:

  • Design tasks that allow for identical procedural practice until the procedure is learnt

References:

Cohen, N. J., Poldrack, R. A., & Eichenbaum, H. (1997). Memory for items and memory for relations in the procedural/declarative memory framework. Memory, 5(1–2), 131–178.

Koziol, L. F., & Budding, D. E. (2009). Subcortical structures and cognition: Implications for neuropsychological assessment. New York: Springer.

Ullman, M. T. (2004). Contributions of memory circuits to language: The declarative/procedural model.

Willingham, D. (2009). Why Don’t Students Like School?