TOT009:

Teacher Ollie’s Takeaways is a weekly post bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here.

Astrophysicists and feminism

A great post, prompted by a meme shared for International Women’s Day, on how young women aspiring to be astrophysicists is great, but so is little girls aspiring to be princesses…

What makes a good PD?

Turns out that almost all professional development for teachers fails, that is, it doesn’t have any measurable impact on student learning (great citations for this in this article). In the face of this, should we give up on PD altogether? In this article @HFletcherWood tells us some of the keys to good PD.

PISA and Technology in the Classroom

20 good YouTube channels for Maths Teachers

The back and forth on explicit instruction

If you want to hear leaders in their field engaging in the constructivism vs. explicit instruction debate, the articles linked to in the comments of this article are a fantastic place to start. I’m working my way through them at the moment.

The performance of partially selective schools in England

Do partially selective schools improve results for students? Here’s a moderate-scale study suggesting partially selective schools may not have such beneficial effects for those who attend…

Philosophy For Children. Effective or not?

Philosophy for Children is a program that aims to teach students how to think philosophically, and to improve oracy skills, and communication more broadly. Here’s a study attesting to its efficacy, see replies to this tweet for an alternative view…

The Mighty, a website highlighting the writing of Mighty People

Eloquent argument against the same old ‘new education’ assumptions

Tom Bennett argues against a new film that rips on our educational system. The film trots out all the usual ‘stifles creativity’ and ‘rote learning’ tropes. Great reply from Tom.

What to do when your child stares at another child with a disability?

Great post from Daniel Willingham. Hot tip: ensure it’s a social interaction. Follow the link for more.

Trump’s policies in perspective

Just because…

TOT008:

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

Not a podcast this week, just a few notes on key takeaways : )

Seminal Papers in Educational Psychology.

Check them out!

Guide your teaching by setting questions that you want the students to be able to answer.

Birmo tweets about the new ‘My Induction’ app.

It’s pretty interesting, got some decent tips, and some good starting points for new teachers.

Collection of evidence on direct instruction.

This is gold! E.g., I knew I’d read somewhere in the last PISA report that inquiry learning was negatively associated with science outcomes, spent about 15 minutes last week trying to re-find it, then gave up. Lo and behold, it’s right here!!!

Further dissecting Growth Mindset.

This has been a hot topic on Twitter recently. Here’s a collation of posts, well worth a look.

More evidence for Explicit Instruction in Maths

Effectiveness of Explicit and Constructivist Mathematics Instruction for Low-Achieving Students in the Netherlands

A must listen podcast!

I love the Mr. Barton Podcast, and this week was an absolute ripper. I can’t think of a better use of 2 hours of a teacher’s time than to listen to this!

How deep can a simple maths question take us?

A really simple maths question, with some amazing results!

Here’s a sneak peek

[Screenshots: two example ‘Quarter the Cross’ solutions]

Source: https://blogs.adelaide.edu.au/maths-learning/2016/04/12/quarter-the-cross/

Just for Fun. Pie Graphs in Action!!!

Want to see an elegant example of scaffolding?

How to help students to move from concrete examples to generalisations. This is a short and sweet classroom snapshot of how to do this incredibly effectively.

‘When will I ever use this?’: The ultimate comeback!!!

Thanks for joining me for another week with Teacher Ollie’s Takeaways : )

O.

TOT005: Why constructivism doesn’t work, evolution and cognition, the reliability of classroom observations, routines, and a classroom story


Show Notes

Why minimal guidance during instruction doesn’t work

Ref: Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

The arguments for and against minimally guided instruction

  • Assertion:

    The most recent version of instruction with minimal guidance comes from constructivism (e.g., Steffe & Gale, 1995), which appears to have been derived from observations that knowledge is constructed by learners and so (a) they need to have the opportunity to construct by being presented with goals and minimal information, and (b) learning is idiosyncratic and so a common instructional format or strategies are ineffective.

  • Response:

    “The constructivist description of learning is accurate, but the instructional consequences suggested by constructivists do not necessarily follow.”

Learners ultimately have to construct a mental schema of the information; that schema is what we’re trying to furnish them with. It turns out that the less of a schema we give them (as with minimal guidance), the less complete a schema they end up with. Essentially, if we give them the full picture, it will better help them to construct the full picture!

  • Assertion:

    Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on learning a discipline by experiencing the processes and procedures of the discipline (Handelsman et al., 2004; Hodson, 1988). This change in focus was accompanied by an assumption shared by many leading educators and discipline specialists that knowledge can best be learned or only learned through experience that is based primarily on the procedures of the discipline. This point of view led to a commitment by educators to extensive practical or project work, and the rejection of instruction based on the facts, laws, principles and theories that make up a discipline’s content accompanied by the use of discovery and inquiry methods of instruction.

  • Response:

    …it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes (i.e., the epistemology) of the discipline being studied and a mistake to assume that instruction should exclusively focus on methods and processes. (see Shulman (1986; Shulman & Hutchings, 1999)).

This gets to the heart of the distinction between experts and novices. Experts and novices simply don’t learn the same way. They don’t have the same background knowledge at their disposal. By teaching novices in the way that experts should be taught we’re really doing them a disservice, overloading working memories, and simply being ineffective teachers.

Drilling down to the evidence:

None of the preceding arguments and theorizing would be important if there was a clear body of research using controlled experiments indicating that unguided or minimally guided instruction was more effective than guided instruction. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name with the cycle then repeating itself. Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the “debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning” (p. 18).

Current Research Supporting Direct Guidance

The list is too long to reproduce in full; here are some excerpts.

Aulls (2002), who observed a number of teachers as they implemented constructivist activities…He described the “scaffolding” that the most effective teachers introduced when students failed to make learning progress in a discovery setting. He reported that the teacher whose students achieved all of their learning goals spent a great deal of time in instructional interactions with students.

Stronger evidence from well-designed, controlled experimental studies also supports direct instructional guidance (e.g., see Moreno, 2004; Tuovinen & Sweller, 1999).

Klahr and Nigam (2004) tested transfer following discovery learning and found that those relatively few students who learned via discovery ‘showed no signs of superior quality of learning’.

Re-visiting Sweller’s ‘Story of a Research Program’.

From last week: Goal free effect, worked example effect, split attention effect.

My post from this week on trying out the goal free effect in my classroom.

See full paper here.

David Geary provided the relevant theoretical constructs (Geary, 2012). He described two categories of knowledge: biologically primary knowledge that we have evolved to acquire and so learn effortlessly and unconsciously and biologically secondary knowledge that we need for cultural reasons. Examples of primary knowledge are learning to listen and speak a first language while virtually everything learned in educational institutions provides an example of secondary knowledge. We invented schools in order to provide biologically secondary knowledge. (pg. 11)

For many years our field had been faced with arguments along the following lines. Look at the ease with which people learn outside of class and the difficulty they have learning in class. They can accomplish objectively complex tasks such as learning to listen and speak, to recognise faces, or to interact with each other, with consummate ease. In contrast, look at how relatively difficult it is for students to learn to read and write, learn mathematics or learn any of the other subjects taught in class. The key, the argument went, was to make learning in class more similar to learning outside of class. If we made learning in class similar to learning outside of class, it would be just as natural and easy.

How might we model learning in class on learning outside of class? The argument was obvious. We should allow learners to discover knowledge for themselves without explicit teaching. We should not present information to learners – it was called “knowledge transmission” – because that is an unnatural, perhaps impossible, way of learning. We cannot transmit knowledge to learners because they have to construct it themselves. All we can do is organize the conditions that will facilitate knowledge construction and then leave it to students to construct their version of reality themselves. The argument was plausible and swept the education world.

The argument had one flaw. It was impossible to develop a body of empirical literature supporting it using properly constructed, randomized, controlled trials.

The worked example effect demonstrated clearly that showing learners how to do something was far better than having them work it out themselves. Of course, with the advantage of hindsight provided by Geary’s distinction between biologically primary and secondary knowledge, it is obvious where the problem lies. The difference in ease of learning between class-based and non-class-based topics had nothing to do with differences in how they were taught and everything to do with differences in the nature of the topics.

If class-based topics really could be learned as easily as non-class-based topics, we would never have bothered including them in a curriculum since they would be learned perfectly well without ever being mentioned in educational institutions. If children are not explicitly taught to read and write in school, most of them will not learn to read and write. In contrast, they will learn to listen and speak without ever going to school.

Re-visit Heather Hill.

I asked: Dylan Wiliam quotes you and says ‘Heather Hill’s – http://hvrd.me/TtXcYh – work at Harvard suggested that a teacher would need to be observed teaching 5 different classes, with every observation made by 6 independent observers, to be able to reliably judge a teacher.’

Heather replied.

Thanks for your question about how many observations are necessary. It really depends upon the purpose for use.

1. If the use is teacher professional development. I wouldn’t worry too much about score reliability if the observations are used for informal/growth purposes. It’s much more valuable to have teachers and observers actually processing the instruction they are seeing, and then talking about it, than to be spending their time worrying about the “right” score for a lesson.

That principle is actually the basis for our own coaching program, which we built around our observation instrument (the MQI):

http://mqicoaching.cepr.harvard.edu

The goal is to have teachers learn the MQI (though any instrument would do), then analyze their own instruction vis-a-vis the MQI, and plan for improvement by using the upper MQI score points as targets. So for instance, if a teacher concludes that she is a “low” for student engagement, she then plans with her coach how to become a “mid” on this item. The coach serves as a therapist of sorts, giving teachers tools, cheering her on, and making sure she stays on course rather than telling the teacher exactly what to do. During this process, we’re not actually too concerned that either the teacher (or even coach) scores correctly; we do want folks to be noticing what we notice, however, about instruction. A granular distinction, but one that makes coaching much easier.

2. If the use is for formal evaluation. Here, score reliability matters much more, especially if there’s going to be consequential decisions made based on teacher scores. You don’t want to be wrong about promoting a teacher or selecting a coach based on excellent classroom instruction. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see a paper I wrote with Matt Kraft and Charalambos Charalambous in Educational Researcher) to get reliable scores. However, my colleague Andrew Ho and colleagues came up with the 6 observations/5 observer estimates from the Measures of Effective Teaching data:

http://k12education.gatesfoundation.org/wp-content/uploads/2015/12/MET_Reliability-of-Classroom-Observations_Research-Paper.pdf

And looking at our own reliability data from recent uses of the MQI, I tend to believe his estimate more than our own. I’d also add that better score reliability can probably be achieved if a “community of practice” is doing the scoring — folks who have taken the instrument and adapted it slightly to their own ideas and needs. It’s a bet that I have, but not one that I’ve tested (other than informally).

The actual MQI instrument itself and its training is here:

http://isites.harvard.edu/icb/icb.do?keyword=mqi_training

We’re always happy to answer questions, either about the instrument, scoring, or the coaching.

Best,
Heather

Routines.

Post from Gary Jones, ‘Do you work in a “stupid” school?’, on functional stupidity and how smart people end up doing silly things that result in all sorts of bad outcomes, one of which is poor instruction for students.

Here are two of the 7 routines that the post highlighted for avoiding functional stupidity (originally from Alvesson, M., & Spicer, A. (2016). The stupidity paradox: The power and pitfalls of functional stupidity at work).

Newcomers – find ways of taking advantage of the perspective of new members of staff and their ‘beginner’s mind’. Ask them: What seems strange or confusing? What’s different? What could be done differently?

Pre-mortems – work out why a project ‘failed’ before you even start the project. See http://evidencebasededucationalleadership.blogspot.com/2016/11/the-school-research-lead-premortems-and.html for more details.

 

From the classroom…

TOT #004. John Sweller’s Cognitive Load Theory, Using Question Stems, and What does Ed in Australia Need?


Show Notes

Cognitive Load Theory, John Sweller.

Wiliam then posted a link to Sweller’s article entitled ‘Story of a Research Program’. The following excerpts are from that article.

It starts off biographically,

I was born in 1946 in Poland to parents who, apart from my older sister, were their families’ sole survivors of the Holocaust.

With touches of dry humour…

At school, I began as a mediocre student who slowly deteriorated to the status of a very poor student by the time I arrived at the University of Adelaide…. 

Initially, I enrolled in an undergraduate dentistry course but never managed to advance beyond the first year. While I am sure that was a relief to the Dental Faculty, it also should be a relief to Australian dental patients.

Given the physical proximity of the teeth and brain, I decided next to try my luck at psychology. It was a good choice because my grades immediately shot up from appalling back to mediocre, where they had been earlier in my academic career. I decided I wanted to be an academic.

Sweller eventually ended up at UNSW. Then he details the seminal experiment. 

After several non-descript experiments, I saw some results that I thought might be important. I, along with research students Bob Mawer and Wally Howe, was running an experiment on problem solving, testing undergraduate students (Sweller, Mawer, & Howe, 1982). The problems required students to transform a given number into a goal number where the only two moves allowed were multiplying by 3 or subtracting 29.

Each problem had only one possible solution and that solution required an alternation of multiplying by 3 and subtracting 29 a specific number of times. For example, a given and goal number might require a 2-step solution requiring a single sequence of: x 3, – 29 to transform the given number into the goal number. Other, more difficult problems would require the same sequence consisting of the same two steps repeated a variable number of times.

My undergraduates found these problems relatively easy to solve with very few failures, but there was something strange about their solutions. While all problems had to be solved by this alternation sequence very few students discovered the rule, that is, the solution sequence of alternating the two possible moves. Whatever the problem solvers were doing to solve the problems, learning the alternating solution sequence rule did not play a part.

Cognitive load theory probably can be traced back to that experiment.
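For concreteness, here’s a minimal sketch (my own illustration, not from the paper) of a solver for these number-transformation problems, using a breadth-first search over the two allowed moves. The function name, move labels, and example numbers are all my invention:

```python
from collections import deque

def solve(start, goal, max_depth=8):
    """Find the shortest sequence of moves (multiply by 3, or subtract 29)
    transforming `start` into `goal`, in the style of the problems used by
    Sweller, Mawer & Howe (1982). Returns a list of move labels, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, moves = queue.popleft()
        if value == goal:
            return moves
        if len(moves) >= max_depth:
            continue
        # The only two legal moves in the experiment
        for label, nxt in (("x3", value * 3), ("-29", value - 29)):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [label]))
    return None
```

For a hypothetical 2-step problem (given 31, goal 64), the solver returns `['x3', '-29']`. Students in the study solved such problems readily, yet almost none noticed that every solution alternates the two moves, which is what suggested to Sweller that means-ends problem solving and schema learning were competing for the same resources.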

But this was an isolated case. Sweller needed to demonstrate it in an educational context. Research was taken to the fields of maths and physics education, and it did indeed show the effect. I’ll talk briefly about some of the Cognitive Load Effects in education, and we’ll save some more for the next two or three episodes of TOT.

The Goal Free Effect: 

If working memory during problem solving was overloaded by attempts to reach the problem goal thus preventing learning, then eliminating the problem goal might allow working memory resources to be directed to learning useful move combinations rather than searching for a goal. Problem solvers could not reduce the distance between their current problem state and the goal using means-ends analysis if they did not have a specific goal state. Rather than asking learners to “find Angle X” in a geometry problem, it might be better to ask them to “find the value of as many angles as possible”.

A couple of other effects are worth noting: the worked example effect and the split-attention effect.

Using Question Stems in the Classroom

Jennifer Gonzalez’s ‘Is Your Classroom Academically Safe?’

Gonzalez’s question stems to scaffold student questioning:

  • This is what I do understand… (summarize up to the point of misunderstanding)
  • Can you tell me if I’ve got this right? (paraphrasing current understanding)
  • Can you please show another example?
  • Could you explain that one more time?
  • Is it ______ or _________? (identifying a point of confusion between two possibilities)

I said:

  • What is ___ in the diagram?
  • Am I right in thinking that ___?
  • What’s the difference between ___ and ___?

Would love more suggestions.

What Would it Take to Fix Education in Australia?

Full article here, but I’ll just talk briefly about two comments made in question time.

Larissa made an interesting point on the role of literacy. Following up on a question from Maxine McKew on the inclusion of Australian literature in Australian schools, she suggested that the literature studied in schools must represent the diversity of our Australian society. If we don’t do this then we’re effectively saying to vast swathes of our society ‘You do not have a place here’.

Glenn: There’s a misalignment between the locus of policy making and the locus of accountability in Australia. We’ve increasingly got federal bodies making decisions that have implications for education right across the country (locus of policy making), whereas the accountability to the impacts of these decisions actually falls not at the federal level but at the state levels. Fundamentally this is a broken feedback loop (my terminology) that undermines improvements and accountability right throughout the system.

Several times whilst I was listening to this very high level discussion on education a quote came to mind that I heard a couple of years ago: ‘If you change what happens in your classroom, you are changing the education system.’

TOT #003. A student reflects on learning strategies, Edu podcasts for kids, computers vs. paper for note taking, and the rise of randomised control trials.


Show Notes

A Student tries out effective learning strategies

Original author: Syeda Nizami

The Strategies: Spaced Practice, Retrieval Practice, Elaboration, Interleaving, Concrete Examples, Dual Coding

“Overall, each of the six strategies had their strengths and weaknesses, and it somewhat depends on which method is preferable to you, but I think the two that are truly essential are retrieval practice and spacing. Retrieval practice was and is my preferred way of studying for a quiz or exam, but this experience made me realize how truly useful it is. To be perfectly honest, spacing was a strategy I had never tried out before, even though teachers had always stressed that cramming wasn’t effective.”

Edu Podcasts for Kids (or for inspiration!)

The Show about Science: This science interview show is hosted by 6-year-old Nate, and while it has some serious science chops, it’s also just plain adorable. Nate talks to scientists about everything from alligators to radiation to vultures, in his distinctly original interviewing style.

Episode on Ants! Nate’s first interview : ) 

Are laptops and tablets a help or a hindrance to note taking?

The Impact of Computer Usage on Academic Performance: Evidence from a Randomized Trial at the United States Military Academy (Carter, Greenberg and Walker, 2016)

We present findings from a study that prohibited computer devices in randomly selected classrooms of an introductory economics course at the United States Military Academy. Average final exam scores among students assigned to classrooms that allowed computers were 18 percent of a standard deviation lower than exam scores of students in classrooms that prohibited computers. Through the use of two separate treatment arms, we uncover evidence that this negative effect occurs in classrooms where laptops and tablets are permitted without restriction and in classrooms where students are only permitted to use tablets that must remain flat on the desk surface.

One of the highlights of my day at researchED Amsterdam was hearing Paul Kirschner speak about edu-myths. He began his presentation by forbidding the use of laptops or mobile phones, explaining that taking notes electronically leads to poorer recall than handwritten notes. The benefits of handwritten over typed notes include better immediate recall as well as improved retention after 2 weeks. In addition, students who take handwritten notes are more likely to remember facts but also to have better future understanding of the topic. Fascinatingly, it doesn’t even matter whether you ever look at these notes – the simple act of making them appears to be beneficial.

The rise of Randomised Controlled Trials

Original article by Robert Slavin, who told us about reciprocal teaching effects in TOT001.

A nice quote to end on

 

TOT #002. Teaching ‘The Scientific Method’, Growth Mindset Hoax? Instructional Techniques, Class Sizes, and Addressing Visible Disadvantage


Show Notes

Teaching ‘the scientific method’

Superb post from @mfordhamhistory, on how we can teach students the discipline through a curriculum of case studies: https://t.co/Akgpv6D3NT

— Harry Fletcher-Wood (@HFletcherWood) January 10, 2017

Original post by Michael Fordham

‘1. Disciplines are characterised as much by their internal differences as their similarities.

2. There is no Platonic ideal of each discipline

3. Generalised models of disciplines rarely reflect what happens on the ground

All of these points lead me to great scepticism about curriculum theories in history, science or other disciplines that work by distilling the ‘essence’ from those disciplines, and teaching those. I am not at all convinced that we can teach children ‘the scientific method’ in a general sense before they have learnt a number of cases of scientific research in practice.

History teachers have produced numerous examples of this over the last few years. Steve Mastin, for example, designed a scheme of work in which he taught his pupils how one historian (Eamon Duffy) had worked with a particular body of source material to answer questions about the impact of the reformation in England. Rachel Foster has a similarly well-cited example where she designed a scheme of work around the way two different historians (Goldhagen and Browning) had interpreted the same source material (a report from a police battalion involved in the Holocaust) in quite different ways. In examples such as these, children are taught about a specific example of where historians have undertaken research. Over time, as pupils learn more and more cases of disciplinary practice, we can then teach them the similarities and differences between different approaches: we thus end with abstract ideas, rather than beginning with them.

This means that I would suggest the following as an alternative way of teaching disciplinary practice to school children. Rather than distil some general, abstract ideas about ‘how the discipline works’, we would be better off specifying a range of specific cases of disciplinary practice for children to learn, from which we can as teachers tease out the similarities and differences in approach that characterise our respective disciplines.’

Is Growth Mindset a Hoax?

Original article by Tom Chivers, about the hype around growth mindset, claimed to be able to do everything from helping struggling students to bringing peace to the Middle East.

‘Scott Alexander, the pseudonymous psychiatrist behind the blog Slate Star Codex, described Dweck’s findings as “really weird”, saying “either something is really wrong here, or [the growth mindset intervention] produces the strongest effects in all of psychology”.
He asks: “Is growth mindset the one concept in psychology which throws up gigantic effect sizes … Or did Carol Dweck really, honest-to-goodness, make a pact with the Devil in which she offered her eternal soul in exchange for spectacular study results?”

Strongest evidence from Timothy Bates’ research…

‘Bates told BuzzFeed News that he has been trying to replicate Dweck’s findings in that key mindset study for several years. “We’re running a third study in China now,” he said. “With 200 12-year-olds. And the results are just null.

“People with a growth mindset don’t cope any better with failure. If we give them the mindset intervention, it doesn’t make them behave better. Kids with the growth mindset aren’t getting better grades, either before or after our intervention study.”

Dweck told BuzzFeed News that attempts to replicate can fail because the scientists haven’t created the right conditions. “Not anyone can do a replication,” she said. “We put so much thought into creating an environment; we spend hours and days on each question, on creating a context in which the phenomenon could plausibly emerge.’

Reply by Scott Alexander. http://slatestarcodex.com/2017/01/14/should-buzzfeed-publish-information-which-is-explosive-if-true-but-not-completely-verified/

‘it mentions a psychologist Timothy Bates who has tried to replicate Dweck’s experiments (at least) twice, and failed. This is the strongest evidence the article presents. But I don’t think any of Bates’ failed replications have been published – or at least I couldn’t find them. Yet hundreds of studies that successfully demonstrate growth mindset have been published. Just as a million studies of a fake phenomenon will produce a few positive results, so a million replications of a real phenomenon will produce a few negative results. We have to look at the entire field and see the balance of negative and positive results. The last time I tried to do this, the only thing I could find was this meta-analysis of 113 studies which found a positive effect for growth mindset and relatively little publication bias in the field.’

‘I guess my concern is this: the Buzzfeed article sounds really convincing. But I could write an equally convincing article, with exactly the same structure, refuting eg global warming science. I would start by talking about how global warming is really hyped in the media (true!), that people are making various ridiculous claims about it (true!), interview a few scientists who doubt it (98% of climatologists believing it means 2% don’t), and cite two or three studies that fail to find it (98% of studies supporting it means 2% don’t). Then I would point out slight statistical irregularities in some of the key global warming papers, because every paper has slight statistical irregularities. Then I would talk about the replication crisis a lot.’

‘Again, this isn’t to say I believe in growth mindset. I recently talked to a totally different professor who said he’d tried and failed to replicate some of the original growth mindset work (again, not yet published). But we should do this the right way and not let our intuitions leap ahead of the facts.

I worry that one day there’s going to be some weird effect that actually is a bizarre miracle. Studies will confirm it again and again. And if we’re not careful, we’ll just say “Yeah, but replication crisis, also I heard a rumor that somebody failed to confirm it,” and then forget about it. And then we’ll miss our chance to bring peace to the Middle East just by doing a simple experimental manipulation on the Prime Minister of Israel.’
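Alexander’s base-rate point is easy to check with a quick simulation of my own (this is my illustration, not something from his post): even when a phenomenon is completely real, each individual replication only detects it with probability equal to the study’s statistical power, so a small pile of failed replications is exactly what we’d expect either way.

```python
import random

random.seed(0)

# Assume each replication of a REAL effect has the conventional 80%
# statistical power, i.e. an 80% chance of coming up positive.
POWER = 0.8
N_REPLICATIONS = 1_000_000

# Count how many honest replications of the real effect fail anyway.
failures = sum(1 for _ in range(N_REPLICATIONS) if random.random() > POWER)

# The failure rate sits close to 1 - POWER, i.e. roughly 20%.
print(f"{failures / N_REPLICATIONS:.1%} of replications of a real effect failed")
```

So a couple of unpublished failed replications, on their own, can’t distinguish a real phenomenon from a fake one; as Alexander says, you have to look at the balance across the whole field.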

Using private school instructional techniques in a public school

Greg Ashman pointed me to an article by Joe Kirby on how public schools can adopt some of the practices that high achieving private schools implement, without the massive cost barriers.

e.g., ‘Teaching writing is heavily guided, even up to sixth form. In History, for instance, starting point sentences are shared for each paragraph of complex essays on new material. Extensive written guidance is shared with pupils. Sub-questions within each paragraph and numerous facts are also shared.’

Does class size matter?

Original article by Valerie Strauss
(read whole article)

How does visible disadvantage impact student outcomes?

Original post by Megan Smith.

Asking students to raise their hands to signal their achievement (when they know an answer) highlights differences in performance between students, making those differences more visible. This can lead students from lower social classes, or with less familiarity with a task, to perform even worse than they otherwise would have. In other words, highlighting performance gaps with no explanation for the gap can make the gap even wider! However, making students aware of the fact that some are more familiar with the tasks, due to extra training, can mitigate these issues.

Working towards a more evidence informed Professional Development Review process.

My school is currently reviewing our PDR process. As the new head of senior maths, I see this as a really crucial time for me to step up and try to bring some things to the table that will ensure that, as a team, the senior maths teachers are teaching in an evidence informed fashion.

I’m posting now, prior to submitting final ideas to our college, in order to share some thoughts and hopefully open up a discussion with others so that I can improve and optimise this process.

In partnership with my colleagues we’ve brought in a whole new instructional process this year at our senior college. At the moment we’re working on bedding it down, and having input into the PDR process means ensuring that we’re all being asked by leadership to provide evidence for instructional practices that we actually think are going to contribute to student learning.

I’ve drafted the document below as a list of things that I myself would like to be measured against and I’m looking to take this to our maths team meeting soon to see if there’s anything that the team would like to add or subtract as we make our submission to leadership. (Hover over the top right of the doc to open in a new page).

I’d love any thoughts or comments on what I’ve put together thus far and how it can be improved.

Note: The ‘goals’ across the top come from our pre-existing PDR process. They’re non-negotiable so each of the elements I’ve included below will fit under those three goal headings (I’ll work out which goes where later, they’re each broad enough that alignment shouldn’t be an issue).

Note 2: SIM stands for ‘Sunshine Instructional Model’, we have a pre-established instructional model so I’ve just highlighted the main points that I think map really well onto that.

Any thoughts or comments gratefully received : )

Ollie.

It’s not that they don’t care, it’s that they don’t think they can succeed

I just attended a lecture by Roy Baumeister. It was a wide-ranging talk about the past, the future, and how predictions and prospections of the future influence decision making. One experiment that Roy spoke of piqued my interest when I considered it in relation to what I’ve seen with my students, and their motivation, in the classroom.

The experiment had two conditions (let’s call them red and blue). To start off with, individuals in both conditions were asked to answer six questions. However, the results were rigged such that individuals in the blue condition were told that two of their answers were correct, and those in the red condition were told that five of their six answers were correct. Then all subjects were asked to make a happiness forecast: they were asked a question like ‘We’re now going to give you six similar questions, how happy do you think you’d be next time if you got all six correct?’*. Their happiness forecasts can be seen in the image below.

Screen Shot 2017-02-22 at 6.46.41 pm

That is, individuals who only got two questions correct the first time (blue) said something along the lines of ‘oh yeah, I guess I’d be kinda happy if I got all of them correct’, whereas those who got five correct the first time, and thought they had a pretty good chance of getting six correct, said something like ‘oh yeah, I’d be really quite happy to get six correct!’.

Then came the moment of truth. All were again presented with six questions and this time all participants were told that they got all six questions correct! So… how happy were they? Here are the results.

Office Lens 20170222-183320

When looking at this graph I thought about my own classroom. I thought about all of the students over the years who have said ‘I hate maths’ or ‘I don’t care about this anyway’. Could it be that it isn’t that these students don’t care, it isn’t that they hate maths, it’s just that they rate their chances of success so low that it’s a pragmatic decision for them to claim that they don’t care? This could in fact be a rational and calculated decision on their part that aims to lessen the pain of anticipated failure. Baumeister alluded to one of Aesop’s Fables:

An hungry Fox with fierce attack
Sprang on a Vine, but tumbled back,
Nor could attain the point in view,
So near the sky the bunches grew.
As he went off, “They’re scurvy stuff,”
Says he, “and not half ripe enough–
And I ‘ve more rev’rence for my tripes
Than to torment them with the gripes.”
For those this tale is very pat
Who lessen what they can’t come at.

It’s not that they don’t care, it’s that they don’t think they can succeed. It’s our job as teachers to teach in such a way that these students experience success and, bit by bit, they’ll come to value success more highly because they’ll believe it’s achievable, and they’ll be willing to invest more effort to attain it. The good news is, as the right two columns of the graph show, the further behind students start, the more they’ll enjoy the achievement when they get there!

*I’ve recounted this experiment as well as I can remember, but this is currently in press so I wasn’t able to go over it to fact check my recollection of Roy’s explanation of the study.

‘The blogosphere recapitulates the teacher’s career’

Earlier on today I came across a blog post from Michael Pershan collating a whole heap of golden posts from 2016 (from other bloggers), as well as a few interesting reflections. My interest was piqued when Michael wrote:

When I started reading and writing about teaching back in 2010-2011, it seemed to me that the vast majority of math teachers were blogging about the activities they made or used. Most people were embedding slides or worksheets, or describing progressions of questions they had used.

Michael then shared what he’s been enjoying most from the year just past…

I think of many of my favorite posts from 2016 — like Lisa and Grace’s above — and they focus on relationships and culture. But how do you talk about relationships and culture? This is hard stuff! It’s what, perhaps, teachers will be blogging about more in years to come, but it’s not easy to figure out how to talk about. The language isn’t always there for us in the way it is about designing a great task.

I was struck by these ideas off the back of a conversation I had with another early career teacher today in the maths staffroom. My mate Ben and I were talking about our responsibility to our current Y12 students, and how our No. 1 goal is to prepare them for their end of year exams; ‘we can’t let them down’. As our brief chat ended and Ben swivelled back around to his desk, I was left thinking. I was struck by the gap between my main focus at the time of lesson planning, giving my students the skills required to have success in the exam, and the more grandiose teaching values I’ve espoused over time; develop questioning and critical thinkers, independent learners, considerate democratic citizens, (etc).

Rather than becoming depressed at this apparent gap between ideals and reality I was somewhat liberated by my inchoate understanding of the difference between novices and experts, and simultaneously somewhat tickled by a phrase that came to mind somewhere out of the recesses of my long term memory. That phrase was, ‘Ontogeny recapitulates phylogeny‘*, or as I took it to mean, ‘The developmental stages of the individual’s life mirror the evolutionary stages of their phylum’.

I realised that a parallel could be drawn between the above phrase and Michael’s observations regarding the development of the maths blogosphere. Maybe the blogosphere, in a way, mirrors the development that I currently feel myself going through. The equivalent phrase could be something like, ‘The blogosphere recapitulates the teacher’s career‘.

I’m at a time in my novice development where I’m really focussing on the ‘what’ of classroom instruction. As I develop and bed down the basics of good instruction, space will open up in my working memory (I hope) whilst I’m in the classroom, enough to enable me to ask more complex questions, instruct and guide the lesson more spontaneously, and think more about classroom culture and the other ‘hard stuff’.

Perhaps we’ve just seen the development of the first cohort of maths bloggers move from novice to expert, with the content and focus of their posts naturally progressing from the concrete to the abstract. If it is fair to say that I’m part of a second generation then I’m just glad there’s now a (sometimes slightly overwhelming) repository of content spanning the entirety of the developmental progression that I can now sink my teeth into!

*It’s a nice quote, but it isn’t necessarily true!


TOT001: What is Direct Instruction? Dylan Wiliam takeaways, and Building Habits

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

This is the first ever episode of the Teacher Ollie’s Takeaways podcast, the podcast in which I summarise my key takeaways from twitter, blogs, research papers, conversations, and even my own classroom, from the week just past.

If you have any thoughts or comments after listening to this podcast, please share them with me via twitter: @ollie_lovell

Show Notes

John Hattie on Direct Instruction

“John (Hattie, 2009) defines direct instruction in a way that conveys an intentional, well-planned, and student-centered guided approach to teaching. “In a nutshell, the teacher decides the learning intentions and success criteria, makes them transparent to the students, demonstrates them by modeling, evaluates if they understand what they have been told by checking for understanding, and re-tells them what they have been told by tying it all together with closure”(p. 206).”

“When thinking of direct instruction in this way, the effect size is 0.59. Dialogic instruction also has a high effect size of 0.82. This doesn’t mean that teachers should always choose one approach over another. It should never be an either/ or situation. The bigger conversation, and purpose of this book, is to show how teachers can choose the right approach at the right time to ensure learning, and how both dialogic and direct approaches have a role to play throughout the learning process, but in different ways.”

“Precision teaching is about knowing what strategies to implement when for maximum impact.”

Some comments on my Masters Project…

“This study shows that, for under-achieving students, the bridge from mathematical challenge and disengagement to success and motivation is a fragile one, and the journey across it becomes more perilous the older a student gets. The ongoing challenge for teachers is to shore up and scaffold this fragile bridge’s structure, and to ensure that the scaffolding provided is appropriate to both the ‘who’ that is crossing, and the ‘when’ of their traverse.”

Tidbit

“Factor Game (http://www.tc.pbs.org/teachers/mathline/lessonplans/pdf/msmp/factor.pdf) in which an understanding of primes and composites was crucial to developing strategies to win”
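To give a flavour of why primes matter here (a sketch of my own; as I understand the game, choosing a number scores you that many points and hands your opponent every proper factor of it, so the rules below are my assumption rather than from the lesson plan itself):

```python
def proper_factors(n: int) -> list[int]:
    """All factors of n that are smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

def is_prime(n: int) -> bool:
    """A prime's only proper factor is 1."""
    return n > 1 and proper_factors(n) == [1]

# Picking a composite like 12 gifts your opponent a lot of points...
print(proper_factors(12))       # [1, 2, 3, 4, 6]
print(sum(proper_factors(12)))  # 16 -> opponent outscores your 12!

# ...whereas a large prime gives them only 1.
print(is_prime(29))             # True
```

So a student who can spot primes quickly has a real strategic edge, which is presumably the point of the game.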

The Mr Barton Podcast with Dylan Wiliam

Original article here.

Reciprocal Teaching

Robert Slavin: When we encourage students to help each other, whilst there are great benefits to both students, the students who learn the most are the ones who do the most explaining.

The Relevance of Problem Contexts

Jo Boaler:
Q: ‘When do girls prefer football to fashion?’
A: When it’s the context of a maths question. Presented with a structurally identical maths question in two different contexts, girls do better than boys when the context is that of football (soccer). This is because they bring less irrelevant and confounding background knowledge into the solving process.

What is learning?

Paul Kirschner: Learning is a change in long term memory, Aka: if they don’t remember it in 6 weeks, they haven’t really learnt it.
Relatedly… John Mason: ‘Teaching takes place in time, but learning takes place over time.’

Ref: James Manion’s article, Learning is Meaningless.

We don’t actually Know what Good Teaching Looks Like!

Heather Hill: We need to stop kidding ourselves by thinking that we can pick a good or a bad teacher by observing them teach a class. Hill suggests they would need to be observed in 6 different classes by 5 different observers (a total of 30 observations) to obtain a reliable rating.

Edit: I emailed Heather Hill about this, and this is what she said: “Thanks for your question. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see attached paper). However, Andrew Ho and colleagues came up with the 6 observations/5 observer estimates from MET data:” Ho’s paper

Dan Goldhaber: Comparing two models of ‘good teaching’ (a fixed effect and a random effect model) based upon ‘value added’ metrics, the best 9% of teachers as rated by one model were classified as the worst teachers in the other!
Dylan concludes that we can only really comment in the extremes, i.e., ‘We can be pretty sure that a teacher who appears to be very very good is in fact not very very bad, and we can be pretty sure that a teacher who appears very very bad is in fact not very very good.’, but that’s about the extent of it.
So… where to? Dylan says that team leaders should focus on one question: ‘What do you want to get better at and how can we do it?’. I’m (Ollie) a bit dubious about this and I think that team leaders could help by guiding efforts to areas where we can be pretty sure that they’ll have a positive effect on learning (more frequent assessment and better feedback, distribution of practice, better modelling, etc).

Thinking Hard and Distributed Practice

Robert Bjork: The harder you think about something the better you remember it. Relatedly, the best time to study something is at the point just before you’ve completely forgotten it!
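One common way to operationalise Bjork’s point is an expanding review schedule, where each successful recall pushes the next review further out, so you keep meeting material just before you’d forget it. Here’s a minimal sketch of that idea (my own illustration with an assumed doubling rule, not something from the podcast):

```python
def next_review_gap(previous_gap_days: int, recalled: bool) -> int:
    """Double the gap after a successful recall; reset to a short gap after a failure."""
    if recalled:
        return previous_gap_days * 2
    return 1  # forgotten: start the schedule again

# A run of successful recalls spaces the reviews out: 1, 2, 4, 8, 16 days.
gap = 1
schedule = []
for _ in range(5):
    schedule.append(gap)
    gap = next_review_gap(gap, recalled=True)

print(schedule)  # [1, 2, 4, 8, 16]
```

The exact multiplier is arbitrary here; the point is simply that reviews get sparser as memory gets stronger, which is the ‘desirable difficulty’ Bjork describes.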

Simple Hacks to improve Assessment

The hypercorrection effect: You get two benefits from assessment. The first is when the testee is forced to recall the information in the first place; this strengthens the synaptic connections. The second benefit is when they see the answer. Thus, in order to maximise learning, the best person to mark a test is the testee themselves.

Synoptic testing: Testing shizzle up to the point that you’re now up to!

Building habits (NY Times article)

Charles Duhigg’s TED talk.

“the core of every habit is a neurological loop with three parts: A cue, a routine and a reward.”

The summary of this article is that you want to get to a point where the reward is internal, i.e., you don’t need any external input from yourself (or your students) to feel good about the habit that you’re trying to establish. However, the interesting thing that this NY Times article points out is that you can start off with an external reward, and use it to build the neuro-associations in such a way that the external reward will eventually no longer be required. I’ll read an excerpt from the article that provides a good example.

“If you want to start running each morning, it’s essential that you choose a simple cue (like always lacing up your sneakers before breakfast or always going for a run at the same time of day) and a clear reward (like a sense of accomplishment from recording your miles, or the endorphin rush you get from a jog). But countless studies have shown that, at first, the rewards inherent in exercise aren’t enough.

So to teach your brain to associate exercise with a reward, you need to give yourself something you really enjoy — like a small piece of chocolate — after your workout.

This is counterintuitive, because most people start exercising to lose weight. But the goal here is to train your brain to associate a certain cue (“It’s 5 o’clock”) with a routine (“Three miles down!”) and a reward (“Chocolate!”).

Eventually, your brain will start expecting the reward inherent in exercise (“It’s 5 o’clock. Three miles down! Endorphin rush!”), and you won’t need the chocolate anymore. In fact, you won’t even want it. But until your neurology learns to enjoy those endorphins and the other rewards inherent in exercise, you need to jump-start the process.

And then, over time, it will become automatic to lace up your jogging shoes each morning. You won’t want the chocolate anymore. You’ll just crave the endorphins. The cue, in addition to triggering a routine, will start triggering a craving for the inherent rewards to come”