Craig Barton recently interviewed me, and during the conversation I discussed a series of lessons I planned and taught on solving simultaneous equations.
I could be wrong, but I think this was the best planning and teaching I ever did.
Several people have asked if I would share examples of what I described during the interview, so I’m adding that here. It’s a bit lengthy, but hopefully provides the detail many people were asking for, as well as some insight into how Siegfried Engelmann’s Theory of Instruction can be applied to the classroom.
I’m splitting the post into four parts:
- Specification of content
- Sequencing of content
- Pedagogy / Instructional Approach
- Limitations of Atomisation
This is Part 4 – Limitations of Atomisation
The Limits of Atomisation
I promised some commentary on the question of how far we should break concepts into a greater number of smaller and smaller pieces. I’m borrowing a word that Bruno Reddy suggested to me years ago to describe this: atomisation.
The benefits of atomisation are simple: it increases the probability of success for each child.
Picture one of the classes you teach. Now for each of those children, picture them with a probability hovering over them: for any given thing you might strive to teach them, selected at random, in any particular lesson, this is the probability that they will succeed in learning it.
Note: The way I’m using the word ‘learning’ here is, strictly speaking, incorrect, but it speaks to our intuition about what’s happening in the classroom. Learning is a long-term effect resulting in changes to long-term memory. What I really mean here is the probability that pupils will respond successfully to predetermined questions in that lesson alone, which I would argue is a necessary but not sufficient condition for learning to eventually take place.
The way in which we teach can affect these probabilities, but the general distribution will likely remain. I suspect it is not true that ‘some methods work better for some children and less well for others’ – and there is a deeply sinister, insidious consequence of this line of thinking, hinted at towards the end of this post.
If, as Daniel Willingham says, we are more alike in how we learn than we are different, then changing the instructional method likely increases or decreases everyone’s probability of learning successfully, while more or less maintaining the distribution, the landscape of probabilities.
Important Note: Sometimes we think that ‘one way of teaching is better for some children than others’ because we switch to a different explanation, analogy or instructional method and find that, when we do, a given child ‘finally gets it.’ We conclude that we have finally hit upon ‘the successful method’ for that particular child. Willingham argues it is more likely that the child simply needed more time, and that trying different things gave them the time they needed to process the idea – or that they simply needed more examples or analogies. In other words, the eventual success was not caused by the particular example we gave them last, but by the cumulative effect of having given them three examples; whichever example we started with would still have resulted in failure. This matters because it is exactly this kind of reasoning that led us into the traps of ‘VAK learning styles’ and ‘left-brain / right-brain dominance’ – ideas that seek to categorise, and therefore limit, what we believe people to be capable of.
In this model, I am suggesting that atomisation raises the probability of each child being successful:
Suddenly, a class that seemed to have just a few ‘super smart’ kids in it now looks as though it has a whole bunch of them, with only a narrow gap in ‘ability’ for most.
This increased success likely results for several reasons that cognitive science can explain; I won’t go into them all here, but a simple one would involve the way atomisation helps us to avoid overloading Working Memory.
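To make this concrete, here is a toy simulation. It is entirely my own invention, not Engelmann’s model or anything from the lessons: the per-unit success rates, the ‘12 units of content’, and the assumption that small steps allow one immediate retry are all made up for illustration. Under those assumptions, atomisation lifts the weaker pupils’ overall chance of success far more than the stronger pupils’, narrowing the gap:

```python
# Toy model (invented for illustration): a child with per-unit success
# probability `rate` succeeds at a step bundling `units` units of
# difficulty with probability rate ** units, and must succeed at every
# step to master the lesson.

def p_coarse(rate, units_per_step, n_steps):
    """Mastery probability with no feedback between steps."""
    return (rate ** units_per_step) ** n_steps

def p_atomised(rate, units_per_step, n_steps):
    """Small steps permit immediate correction: one retry per step."""
    p_step = rate ** units_per_step
    p_step = p_step + (1 - p_step) * p_step  # succeed first time, or on the retry
    return p_step ** n_steps

# 12 units of content: 3 coarse steps of 4 units each,
# versus 12 atomised steps of 1 unit each.
for rate in (0.99, 0.95, 0.90, 0.80):
    coarse = p_coarse(rate, units_per_step=4, n_steps=3)
    atom = p_atomised(rate, units_per_step=1, n_steps=12)
    print(f"rate={rate:.2f}  coarse={coarse:.2f}  atomised={atom:.2f}")
```

On these invented numbers, the strongest modelled pupil barely moves, while the weakest jumps from under a 10% chance to over 60% – exactly the ‘narrow gap in ability’ picture described above.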
But there is no such thing as a free lunch.
The limitation: with increased atomisation comes an increase in the time needed to cover the content.
Some pupils in the class had a very high chance of learning the content before atomisation. What happens to them? Do they lose out as more time is spent on the same topic?
What happens if we take this to an extreme and break things down into the smallest components possible, so that what was once treated as a single idea is now treated as a hundred? Is it possible that the time needed to cover everything in such minute detail would result in a diminished return?
The answer to these questions is not a simple one, but it does exist.
As with most things in life, it’s a balancing act.
First, we’re generally so bad at this (speaking for myself), and our standard textbooks tend to be equally bad at it. In other words, at the moment, atomising more will probably lead to huge gains for most children, in most circumstances, so I would judge that there’s little risk in striving to apply it.
Second, yes, it leads to an increase in initial teaching time, but it also results in the guaranteed initial apprehension of concepts that would otherwise have had a very low probability of being communicated successfully. This means the increase in time spent at the start is rewarded by an increased probability of learning future content: we spend time now in order to save time in the future (a return on investment).
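The return-on-investment point can be sketched numerically. Every figure below is invented for illustration (the lesson counts, the success rates, the remediation cost are all assumptions, not data from these lessons): if content that fails to land must be re-taught later, a slower, more atomised first pass can still win on expected total time:

```python
def expected_total_time(upfront, p_success, remediation):
    """Upfront teaching time, plus the expected cost of re-teaching
    later if the content failed to land the first time."""
    return upfront + (1 - p_success) * remediation

# Invented figures: coarse teaching takes 2 lessons and lands 60% of the
# time; atomised teaching takes 3 lessons and lands 95% of the time.
# Content that fails to land costs 4 lessons of remediation later.
coarse = expected_total_time(upfront=2, p_success=0.60, remediation=4)
atomised = expected_total_time(upfront=3, p_success=0.95, remediation=4)
print(f"coarse: {coarse:.1f} lessons, atomised: {atomised:.1f} lessons")
```

On these numbers the atomised approach costs one extra lesson up front yet is cheaper in expectation (3.2 versus 3.6 lessons); and this ignores the compounding benefit of future topics building on secure foundations.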
Third, atomisation reveals concepts that are otherwise implicit and overlooked in the curriculum. For example, we played with adding and subtracting three and four equations, and looked at adding equations without any intention of eliminating a variable – ideas that are often overlooked if the ‘process of solving simultaneous equations’ is simply taught from beginning to end. As a result, even the ‘higher attainers’ are learning more than they would otherwise (I spoke about this in the podcast).
Finally, there is still a balance to be struck. Too much of anything is bad, by definition, and too much atomisation is probably possible. I wonder whether the appropriate balance shifts from pupil to pupil, and this in turn leads me to wonder what role streams (a potentially better version of setting) and differentiating by time might play. Perhaps a top stream would experience less atomisation than a lower stream, but the lower stream would be gifted more time with their teacher to mitigate the time cost.
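The diminishing return at the extreme is easy to exhibit with the same kind of toy model as before (again, every number here is an invented assumption, not data): if time grows in proportion to the number of steps while mastery probability saturates towards 1, each further round of atomisation buys less per extra lesson:

```python
TOTAL_UNITS = 12  # invented: total 'difficulty units' in the topic

def mastery(rate, n_steps):
    """Toy mastery probability when the content is split into n_steps
    equal steps, each allowing one immediate retry (an assumption)."""
    units = TOTAL_UNITS / n_steps
    p_step = rate ** units
    p_step = p_step + (1 - p_step) * p_step  # one retry per step
    return p_step ** n_steps

# Time is assumed proportional to the number of steps.
for n in (1, 2, 3, 4, 6, 12):
    print(f"{n:2d} steps: mastery = {mastery(0.9, n):.2f}")
```

Mastery keeps rising as the steps get smaller, but the gain per additional step shrinks sharply – which is the shape of the balancing act described above.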
The problem with traditional teaching
Engelmann has a way with words. Much as I find labelling unhelpful, what most people refer to as ‘progressive’ teaching is, for him, ‘traditional’: everything that came before his ideas is ‘traditional.’
It is traditional, because in his mind the traditional position in education is that some kids can learn very well, and others can’t.
In stark contrast with this, Engelmann believes that if you get the teaching right, all children will be successful. The diagrams with the percentages hopefully speak to the image this conjures in my mind.
Consider the following three classes, with their respective probabilities of success in any given lesson:
Engelmann would argue that the traditional teacher sees three classes, with different pupils.
He sees three classes of the same pupils, with different teaching.