TOT010: Sensitively ‘correcting’ students’ English, the limits of ‘evidence-based’ education, and more

Teacher Ollie’s Takeaways is a weekly post (and sometimes a podcast!) bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here

Sensitive Instruction of English

Really enjoyed this episode of the Cult of Pedagogy Podcast. It can be a challenge to know how to correct the culturally based idiosyncrasies in our students’ speech in a sensitive way. Jennifer Gonzalez and Dena Simmons discuss how to do this with both respect and finesse. Well worth a listen!

Challenging the fallacious Fact/Value divide in Education Research

I’m naturally a very quantitatively driven guy. I seem to be drawn to numerical metrics of success, sometimes missing the forest for the trees. Something I’ve been exploring a lot recently is the assumptions underlying much educational research. Here’s just one of the blog posts that I’ve found stimulating in this space in the last few weeks. I’ll hopefully blog more about this in the not too distant future.


On the ‘Fact/Value’ false dichotomy. 

The one side asserts the importance of facts and thinks you cannot argue rationally using evidence about values so excludes them from science; the other side asserts the importance of values and agrees that these cannot be put on a proper research footing so exclude themselves from science. But what if the claim introduced so casually by Hume nearly 300 years ago is simply wrong? What if we can derive values from an investigation of the facts?

And values are always present

Values enter into research when we select what to look at, when we decide how to look at it, and when we interpret the meaning of what we think we see (Standish, 2001). So values are always implicit behind ‘experimental designs’.

Double loop learning!


My suggestion from this example is that what appears to many researchers as an unbridgeable divide between facts and values within educational research is perhaps better understood as the real difference in quality and temporality of these two intertwined research loops. On the one hand focused empirical research within a theoretical framework that embeds values and on the other hand the rather larger and rather longer dialogue of theory that can question and develop the assumptions of research frames. Both loops can be seen as united in a larger conception of science as collective ‘Wissenschaft’ or shared knowledge. Both call upon evidence from experience and both make arguments that seek to persuade. While research findings from the smaller loops of empirical science based on narrow frameworks of assumptions can seem to progress faster and be more certain for a while than the findings of the larger borderless transdisciplinary shared enquiry of humanity, this is an illusion because in fact the cogency of the assumptions behind narrow research programmes depends upon justifications that can only be made within the larger dialogue.

This boils down to the fact that we need to ask ourselves… ‘more efficient at what?’


And here’s another quote from a Schoenfeld article on the same topic!

Do Comprehensive Schools Reduce Social Mobility?

Just a paper that I thought some readers might like to check out on this subject!

Boliver, V., & Swift, A. (2011). Do comprehensive schools reduce social mobility? The British Journal of Sociology, 62(1), 89–110.

Maybe the source of the PISA discrepancy is in large part due to paper- vs. computer-based implementation!?!

 

A Behaviour Management Guide for School Leaders

Google Drive Tools for Teachers

Addressing issues of cultural and political sensitivity in the classroom

This article is well worth a read! Here are some of my favourite quotes…

“When it feels more partisan, we walk more of a tightrope. For the ‘alt-right,’ I didn’t feel we had to walk a tightrope,” said Leslie, who viewed teaching about the alt-right as akin to teaching about the KKK. Racism ought to be a non-partisan subject, she said.

Learning about the alt-right, for example, is a lesson in political literacy. Teachers should not ask students to decide whether the alt-right is a good thing, but they can teach how it came about and how it has affected the political system, Hess said.

 

ERRR #004. Paul Weldon, Teacher Supply and Demand, and Out of Field Teaching

Listen to all past episodes of the ERRR podcast here.


Paul Weldon is a Senior Research Fellow with the Australian Council for Educational Research. He works across a range of educational research programs and is commonly involved in program evaluation and the design, delivery and analysis of surveys. Through his work on the Staff in Australia’s Schools (SiAS) surveys in 2010 and 2013, Paul developed a particular interest in the teacher workforce. He was the lead writer of the most recent Victorian Teacher Supply and Demand Report, and led the recent AEU Victoria workload survey.

In this episode we talk to Paul about his two papers, The teacher workforce in Australia: Supply, demand and data issues and Out-of-field teaching in Australian secondary schools. The discussion includes an in-depth examination of the oft-quoted claim that ‘30% of teachers leave within the first 3 years and 50% within the first 5’ in relation to the retention of early career teachers, the landscape of teacher supply and demand out to 2020, as well as what the distribution of out-of-field teaching in Australia says about how we value our out-of-field teachers.

Links mentioned in the podcast:

 

Australian Policy Online: ‘a research database and alert service providing free access to full text research reports and papers, statistics and other resources essential for public policy development and implementation in Australia, New Zealand and beyond.’

Striving to create an evidence-informed student feedback form.

Seeing as my students have to endure my presence, instructions, and bad jokes for 3 hours and 45 minutes each week, I figure the least I can do is give them an opportunity to tell me how I can make this task a little easier for them. In my first year of teaching I knocked together the below form. I’ve used it for a year now and it’s been really helpful to date. In particular, it’s helped me to bring more celebration into my classroom, with many students over the past year indicating that they want their successes to be celebrated more (usually with lollies!). 
[Screenshot: the original student feedback form]

This has been great, but as I’ve moved into my role as head of senior maths this year it’s prompted me to think more strategically about student feedback and the role it can play in my own and my team’s professional development.

No feedback form is going to tell a teacher, or a team leader, everything they need to know in terms of ‘Where am I going? How am I going? Where to next?’, but I’ve been feeling more and more as though these forms have a key role to play in helping teachers to spot gaps, and in motivating and inspiring us to improve our praxis.

I was really happy with the willingness of my team to roll out the above form (obviously with ‘Ollie’ changed to their individual names) in their own classes, and the insights gained were very illuminating. But coupling these feedback forms with my own observations provided an even bigger insight for me. It surprised me just how differently students (novices when it comes to principles of instruction) and I (a relative expert) view what happens in a classroom.

From this it’s become more apparent to me that if I want student feedback to more effectively drive my own professional development, I need to start asking better and more targeted questions that will allow me to see exactly where my teaching is excelling and where I’m falling short.

So, here’s a first draft of the new feedback questions (which I’ll eventually turn into a Google Form). I’ve based it on the Sutton Trust’s article What makes great teaching? Review of the underpinning research, headed up by Robert Coe. I’ve used the first four of the six “common components suggested by research that teachers should consider when assessing teaching quality” (p. 2). These are the components rated as having ‘strong’ or ‘moderate’ evidence of impact on student outcomes, and they’re also the components with observable outcomes in the classroom (5 and 6 are ‘Teacher Beliefs’ and ‘Professional Behaviours’, which encapsulate practices like reflecting on praxis and collaborating with colleagues).

For each of the following, I’ll get students to rate the statement from 1 (strongly disagree) to 5 (strongly agree), in the hope that this will give me a better idea of how students interpret the various components of my teaching and teacher disposition.

I’ll also add a question at the end along the lines of ‘Is there anything else you’d like to add?’.

I’ve numbered the Qs to make it easy for people to make comments about them on Twitter. This is a working document and today is the second day of our 2-week Easter break. I’m keen to perfect this as much as possible prior to Term 2. Please have a read; I’d love your thoughts and feedback : )

Ollie.

Link to Twitter discussion here.

Edit: A copy of the live form can now be viewed at: http://tiny.cc/copyscstudentfeedback
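Once responses start coming in, the Google Form’s linked spreadsheet can be exported as a CSV and averaged per component rather than per question. Here’s a minimal sketch in Python (the file name and column headings are hypothetical, and the question-to-component map is abridged):

```python
# Minimal sketch: average 1-5 ratings per component of great teaching
# (Coe et al., 2014). File name and column headings are hypothetical;
# extend COMPONENTS with the full list of question IDs.
import csv
from collections import defaultdict

COMPONENTS = {
    "1.1": "Content knowledge", "1.2": "Content knowledge",
    "2.1": "Quality of instruction", "2.2": "Quality of instruction",
    "3.1": "Classroom climate", "4.1": "Classroom management",
}

scores = defaultdict(list)
with open("student_feedback.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        for question, component in COMPONENTS.items():
            if row.get(question):  # skip blank responses
                scores[component].append(int(row[question]))

for component, ratings in sorted(scores.items()):
    print(f"{component}: {sum(ratings) / len(ratings):.2f} (n={len(ratings)})")
```

Grouping by component rather than by question should make it easier to see at a glance which of the four areas needs the most attention.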

Four (of the six) components of great teaching (Coe et al., 2014), with questions.
1. (Pedagogical) content knowledge (Strong evidence of impact on student outcomes)

Student-friendly language: Knowledge of the subject and how to teach it.

The most effective teachers have deep knowledge of the subjects they teach, and when teachers’ knowledge falls below a certain level it is a significant impediment to students’ learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students’ own methods, and identify students’ common misconceptions.

1.1 This teacher has a deep understanding of the maths that they teach you. They really ‘know their stuff’.

1.2 This teacher has a good understanding of how students learn. They really ‘know how to teach’.

 

If you have any comments on this teacher’s knowledge of the content and how to teach it, please write them below.

2. Quality of instruction (Strong evidence of impact on student outcomes)

Student-friendly language: Quality of instruction

Includes elements such as effective questioning and use of assessment by teachers. Specific practices, like reviewing previous learning, providing model responses for students, giving adequate time for practice to embed skills securely, and progressively introducing new learning (scaffolding), are also elements of high-quality instruction.

 

2.1 This teacher clearly communicates to students what they need to be able to do, and how to do it.

2.2 This teacher asks good questions of the class. Their questions test our understanding and help us to better understand too.

2.3 This teacher gives us enough time to practise in class.

2.4 The different parts of this teacher’s lessons are clear. Students know what they should be doing at different times throughout this teacher’s lessons.

2.5 The way that this teacher assesses us helps both us and them to know where we’re at, what we do and don’t know, and what we need to work more on.

2.6 This teacher spends enough time revisiting previous content in class that we don’t forget it.

If you have any comments on the quality of this teacher’s instruction, please write them below.

3. Classroom climate (Moderate evidence of impact on student outcomes)

Student-friendly language: Classroom Atmosphere and Student Relations

Covers quality of interactions between teachers and students, and teacher expectations: the need to create a classroom that is constantly demanding more, but still recognising students’ self-worth. It also involves attributing student success to effort rather than ability and valuing resilience to failure (grit).

 

3.1 Students in this teacher’s class feel academically safe. That is, they don’t feel they’ll be picked on (by teacher or students) if they get something wrong.

3.2 Students in this teacher’s class feel socially safe. That is, this teacher promotes cooperation and support between students.

3.3 Even if I don’t get a top score, if I try my best I know that this teacher will appreciate my hard work.

3.4 This teacher cares about every student in their class.

3.5 This teacher has high expectations of us and what we can achieve.

If you have any comments on the atmosphere of this teacher’s classroom, or their student relations, please write them below.

4. Classroom management (Moderate evidence of impact on student outcomes)

Student-friendly language: Classroom Management

A teacher’s abilities to make efficient use of lesson time, to coordinate classroom resources and space, and to manage students’ behaviour with clear rules that are consistently enforced, are all relevant to maximising the learning that can take place. These environmental factors are necessary for good learning rather than its direct components.

 

4.1 This teacher manages the class’s behaviour well so that we can maximise our time spent learning.

4.2 There are clear rules and consequences in this teacher’s class.

4.3 This teacher is consistent in applying their rules.

4.4 The rules and consequences in this teacher’s class are fair and reasonable, and they help to support our learning.

4.5 Students work hard in this teacher’s class.

If you have any comments on this teacher’s classroom management, please write them below.

Final open-ended question

If you have any further comments or questions in relation to this teacher, please feel free to share them below.

 

 

TOT009:

Teacher Ollie’s Takeaways is a weekly post bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here

Astrophysicists and feminism

A great post, prompted by a meme shared for International Women’s Day, on how young women aspiring to be astrophysicists is great, but so is little girls aspiring to be princesses…

What makes a good PD?

Turns out that almost all professional development for teachers fails; that is, it doesn’t have any measurable impact on student learning (great citations for this in the article). In the face of this, should we give up on PD altogether? In this article @HFletcherWood tells us some of the keys to good PD.

PISA and Technology in the Classroom

20 good YouTube channels for Maths Teachers

The back and forth on explicit instruction

If you want to hear leaders in their field engaging in the constructivism vs. explicit instruction debate, the articles linked to in the comments of this article are a fantastic place to start. I’m working my way through them at the moment.

The performance of partially selective schools in England

Do partially selective schools improve results for students? Here’s a moderate-scale study suggesting partially selective schools maybe don’t have such beneficial effects for those who attend…

Philosophy For Children. Effective or not?

Philosophy for Children is a program that aims to teach students how to think philosophically, and to improve oracy and communication more broadly. Here’s a study attesting to its efficacy; see replies to this tweet for an alternative view…

The Mighty, a website highlighting the writing of Mighty People

Eloquent argument against the same old ‘new education’ assumptions

Tom Bennett argues against a new film that rips on our educational system. The film trots out all the usual ‘stifles creativity’ and ‘rote learning’ tropes; Tom’s reply is great.

What to do when your child stares at a child with a disability?

Great post from Daniel Willingham. Hot tip: ensure it’s a social interaction. Follow the link for more.

Trump’s policies in perspective

Just because…

TOT008:

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Not a podcast this week, just a few notes on key takeaways : )

Seminal Papers in Educational Psychology.

Check them out!

Guide your teaching by setting questions that you want the students to be able to answer.

Birmo tweets about the new ‘My Induction’ app.

It’s pretty interesting, with some decent tips and some good starting points for new teachers.

Collection of evidence on direct instruction.

This is gold! E.g., I knew I’d read somewhere in the last PISA report that inquiry learning was negatively associated with science outcomes, spent about 15 mins trying to re-find it last week, then gave up. Lo and behold, it’s right here!!!

Further dissecting Growth Mindset.

This has been a hot topic on Twitter recently. Here’s a collation of posts, well worth a look.

More evidence for Explicit Instruction in Maths

Effectiveness of Explicit and Constructivist Mathematics Instruction for Low-Achieving Students in the Netherlands

A must listen podcast!

I love the Mr. Barton Podcast, and this week was an absolute ripper. I can’t think of a better use of 2 hours of a teacher’s time than to listen to this!

How deep can a simple maths question take us?

A really simple maths question, with some amazing results!

Here’s a sneak peek

[Screenshots: sample ‘Quarter the Cross’ solutions]

Source: https://blogs.adelaide.edu.au/maths-learning/2016/04/12/quarter-the-cross/

Just for Fun. Pie Graphs in Action!!!

Want to see an elegant example of scaffolding?

How to help students to move from concrete examples to generalisations. This is a short and sweet classroom snapshot of how to do this incredibly effectively.

‘When will I ever use this?’: The ultimate comeback!!!

Thanks for joining me for another week with Teacher Ollie’s Takeaways : )

O.

TOT005: Why constructivism doesn’t work, evolution and cognition, the reliability of classroom observations, routines, and a classroom story

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Show Notes

Why minimal guidance during instruction doesn’t work

Ref: Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

The arguments for and against minimally guided instruction

  • Assertion:

    The most recent version of instruction with minimal guidance comes from constructivism (e.g., Steffe & Gale, 1995), which appears to have been derived from observations that knowledge is constructed by learners and so (a) they need to have the opportunity to construct by being presented with goals and minimal information, and (b) learning is idiosyncratic and so a common instructional format or strategies are ineffective.

  • Response:

    “The constructivist description of learning is accurate, but the instructional consequences suggested by constructivists do not necessarily follow.”

Learners have to construct a mental schema of the information in the end; that’s what we’re trying to furnish them with. And it turns out the less of a schema we give them (as with minimal guidance), the less complete a schema they end up with. Essentially, if we give them the full picture, it will better help them to construct the full picture!

  • Assertion:

    Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on learning a discipline by experiencing the processes and procedures of the discipline (Handelsman et al., 2004; Hodson, 1988). This change in focus was accompanied by an assumption shared by many leading educators and discipline specialists that knowledge can best be learned or only learned through experience that is based primarily on the procedures of the discipline. This point of view led to a commitment by educators to extensive practical or project work, and the rejection of instruction based on the facts, laws, principles and theories that make up a discipline’s content accompanied by the use of discovery and inquiry methods of instruction.

  • Response:

    …it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes (i.e., the epistemology) of the discipline being studied and a mistake to assume that instruction should exclusively focus on methods and processes (see Shulman, 1986; Shulman & Hutchings, 1999).

This gets to the heart of the distinction between experts and novices. Experts and novices simply don’t learn the same way. They don’t have the same background knowledge at their disposal. By teaching novices in the way that experts should be taught, we’re really doing them a disservice, overloading working memories, and simply being ineffective teachers.

Drilling down to the evidence:

None of the preceding arguments and theorizing would be important if there was a clear body of research using controlled experiments indicating that unguided or minimally guided instruction was more effective than guided instruction. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name with the cycle then repeating itself. Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the “debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning” (p. 18).

Current Research Supporting Direct Guidance

The list is too long to reproduce here; these are some excerpts:

Aulls (2002), who observed a number of teachers as they implemented constructivist activities… He described the “scaffolding” that the most effective teachers introduced when students failed to make learning progress in a discovery setting. He reported that the teacher whose students achieved all of their learning goals spent a great deal of time in instructional interactions with students.

Stronger evidence from well-designed, controlled experimental studies also supports direct instructional guidance (e.g., see Moreno, 2004; Tuovinen & Sweller, 1999).

Klahr and Nigam (2004) tested transfer following discovery learning, and found that those relatively few students who learned via discovery ‘showed no signs of superior quality of learning’.

Re-visiting Sweller’s ‘Story of a Research Program’.

From last week: the goal-free effect, worked example effect, and split-attention effect.

My post from this week on trying out the goal-free effect in my classroom.

See full paper here.

David Geary provided the relevant theoretical constructs (Geary, 2012). He described two categories of knowledge: biologically primary knowledge that we have evolved to acquire and so learn effortlessly and unconsciously and biologically secondary knowledge that we need for cultural reasons. Examples of primary knowledge are learning to listen and speak a first language while virtually everything learned in educational institutions provides an example of secondary knowledge. We invented schools in order to provide biologically secondary knowledge. (pg. 11)

For many years our field had been faced with arguments along the following lines. Look at the ease with which people learn outside of class and the difficulty they have learning in class. They can accomplish objectively complex tasks such as learning to listen and speak, to recognise faces, or to interact with each other, with consummate ease. In contrast, look at how relatively difficult it is for students to learn to read and write, learn mathematics or learn any of the other subjects taught in class. The key, the argument went, was to make learning in class more similar to learning outside of class. If we made learning in class similar to learning outside of class, it would be just as natural and easy.

How might we model learning in class on learning outside of class? The argument was obvious. We should allow learners to discover knowledge for themselves without explicit teaching. We should not present information to learners – it was called “knowledge transmission” – because that is an unnatural, perhaps impossible, way of learning. We cannot transmit knowledge to learners because they have to construct it themselves. All we can do is organize the conditions that will facilitate knowledge construction and then leave it to students to construct their version of reality themselves. The argument was plausible and swept the education world.

The argument had one flaw. It was impossible to develop a body of empirical literature supporting it using properly constructed, randomized, controlled trials.

The worked example effect demonstrated clearly that showing learners how to do something was far better than having them work it out themselves. Of course, with the advantage of hindsight provided by Geary’s distinction between biologically primary and secondary knowledge, it is obvious where the problem lies. The difference in ease of learning between class-based and non-class-based topics had nothing to do with differences in how they were taught and everything to do with differences in the nature of the topics.

If class-based topics really could be learned as easily as non-class-based topics, we would never have bothered including them in a curriculum since they would be learned perfectly well without ever being mentioned in educational institutions. If children are not explicitly taught to read and write in school, most of them will not learn to read and write. In contrast, they will learn to listen and speak without ever going to school.

Re-visit Heather Hill.

I asked: Dylan Wiliam quotes you and says ‘Heather Hill’s – http://hvrd.me/TtXcYh – work at Harvard suggested that a teacher would need to be observed teaching 5 different classes, with every observation made by 6 independent observers, to reduce chance enough to really be able to reliably judge a teacher.’

Heather replied:

Thanks for your question about how many observations are necessary. It really depends upon the purpose for use.

1. If the use is teacher professional development. I wouldn’t worry too much about score reliability if the observations are used for informal/growth purposes. It’s much more valuable to have teachers and observers actually processing the instruction they are seeing, and then talking about it, than to be spending their time worrying about the “right” score for a lesson.

That principle is actually the basis for our own coaching program, which we built around our observation instrument (the MQI):

http://mqicoaching.cepr.harvard.edu

The goal is to have teachers learn the MQI (though any instrument would do), then analyze their own instruction vis-a-vis the MQI, and plan for improvement by using the upper MQI score points as targets. So for instance, if a teacher concludes that she is a “low” for student engagement, she then plans with her coach how to become a “mid” on this item. The coach serves as a therapist of sorts, giving teachers tools, cheering her on, and making sure she stays on course rather than telling the teacher exactly what to do. During this process, we’re not actually too concerned that either the teacher (or even coach) scores correctly; we do want folks to be noticing what we notice, however, about instruction. A granular distinction, but one that makes coaching much easier.

2. If the use is for formal evaluation. Here, score reliability matters much more, especially if there’s going to be consequential decisions made based on teacher scores. You don’t want to be wrong about promoting a teacher or selecting a coach based on excellent classroom instruction. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see a paper I wrote with Matt Kraft and Charalambos Charalambous in Educational Researcher) to get reliable scores. However, my colleague Andrew Ho and colleagues came up with the 6 observations/5 observer estimates from the Measures of Effective Teaching data:

http://k12education.gatesfoundation.org/wp-content/uploads/2015/12/MET_Reliability-of-Classroom-Observations_Research-Paper.pdf

And looking at our own reliability data from recent uses of the MQI, I tend to believe his estimate more than our own. I’d also add that better score reliability can probably be achieved if a “community of practice” is doing the scoring — folks who have taken the instrument and adapted it slightly to their own ideas and needs. It’s a bet that I have, but not one that I’ve tested (other than informally).

The actual MQI instrument itself and its training is here:

http://isites.harvard.edu/icb/icb.do?keyword=mqi_training

We’re always happy to answer questions, either about the instrument, scoring, or the coaching.

Best,
Heather
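A side note of my own (not from Heather’s reply): where do estimates like ‘4 observations, each scored by 2 raters’ come from? The papers use generalizability theory, which treats lessons and raters as separate facets, but a simplified one-facet illustration is the Spearman–Brown prophecy formula, which predicts the reliability of an average of k parallel observations from the single-observation reliability:

```latex
\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}
```

So if a single observed lesson has reliability ρ₁ = 0.3, averaging six lessons gives ρ₆ = 6(0.3)/(1 + 5(0.3)) = 0.72, which helps explain why formal evaluation demands so many more observations than informal coaching does.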

Routines.

Post from Gary Jones, ‘Do you work in a “stupid” school?’, on functional stupidity and how smart people end up doing silly things that result in all sorts of bad outcomes, one of which is poor instruction for students.

Here are two of the 7 routines that the post highlighted for avoiding functional stupidity (originally from Alvesson, M., & Spicer, A. (2016). The stupidity paradox: The power and pitfalls of functional stupidity at work).

Newcomers – find ways of taking advantage of the perspective of new members of staff and their ‘beginner’s mind’. Ask them: What seems strange or confusing? What’s different? What could be done differently?

Pre-mortems – work out why a project ‘failed’ before you even start the project. See http://evidencebasededucationalleadership.blogspot.com/2016/11/the-school-research-lead-premortems-and.html for more details.

 

From the classroom…

TOT #004. John Sweller’s Cognitive Load Theory, Using Question Stems, and What does Ed in Australia Need?

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Show Notes

Cognitive Load Theory, John Sweller.

Edit: See a blog post of mine on trying out some CLT informed practices in my classroom here.

Wiliam then posted a link to Sweller’s article entitled ‘Story of a Research Program’. The following excerpts are from that article.

It starts off biographically,

I was born in 1946 in Poland to parents who, apart from my older sister, were their families’ sole survivors of the Holocaust.

With touches of dry humour…

At school, I began as a mediocre student who slowly deteriorated to the status of a very poor student by the time I arrived at the University of Adelaide…. 

Initially, I enrolled in an undergraduate dentistry course but never managed to advance beyond the first year. While I am sure that was a relief to the Dental Faculty, it also should be a relief to Australian dental patients.

Given the physical proximity of the teeth and brain, I decided next to try my luck at psychology. It was a good choice because my grades immediately shot up from appalling back to mediocre, where they had been earlier in my academic career. I decided I wanted to be an academic.

Sweller eventually ended up at UNSW. Then he details the seminal experiment. 

After several non-descript experiments, I saw some results that I thought might be important. I, along with research students Bob Mawer and Wally Howe, was running an experiment on problem solving, testing undergraduate students (Sweller, Mawer, & Howe, 1982). The problems required students to transform a given number into a goal number where the only two moves allowed were multiplying by 3 or subtracting 29.

Each problem had only one possible solution and that solution required an alternation of multiplying by 3 and subtracting 29 a specific number of times. For example, a given and goal number might require a 2-step solution requiring a single sequence of: x 3, – 29 to transform the given number into the goal number. Other, more difficult problems would require the same sequence consisting of the same two steps repeated a variable number of times.

My undergraduates found these problems relatively easy to solve with very few failures, but there was something strange about their solutions. While all problems had to be solved by this alternation sequence very few students discovered the rule, that is, the solution sequence of alternating the two possible moves. Whatever the problem solvers were doing to solve the problems, learning the alternating solution sequence rule did not play a part.

Cognitive load theory probably can be traced back to that experiment.
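To make the structure of those problems concrete, here’s a minimal sketch of my own (not from the paper; the start and goal numbers below are hypothetical) that finds a solution by breadth-first search over the two legal moves:

```python
# Minimal sketch of Sweller's number-transformation puzzle: reach the
# goal number from the start number using only "multiply by 3" and
# "subtract 29". The start/goal values below are hypothetical examples.
from collections import deque

def solve(start, goal, max_depth=8):
    """Breadth-first search for a shortest move sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, path = queue.popleft()
        if value == goal:
            return path
        if len(path) >= max_depth:
            continue
        for name, result in (("x3", value * 3), ("-29", value - 29)):
            if result not in seen:
                seen.add(result)
                queue.append((result, path + [name]))
    return None

print(solve(15, 16))  # ['x3', '-29'], since 15 x 3 = 45 and 45 - 29 = 16
```

The search finds the alternating sequence directly; the point of the experiment is that Sweller’s students solved the problems too, yet without ever noticing the alternation rule.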

But this was an isolated case. Sweller needed to demonstrate it in an educational context. Research was taken to the fields of maths and physics education, and it did indeed show the effect. I’ll talk briefly about some of the cognitive load effects in education, and we’ll save some more for the next two or three episodes of TOT.

The Goal-Free Effect:

If working memory during problem solving was overloaded by attempts to reach the problem goal thus preventing learning, then eliminating the problem goal might allow working memory resources to be directed to learning useful move combinations rather than searching for a goal. Problem solvers could not reduce the distance between their current problem state and the goal using means-ends analysis if they did not have a specific goal state. Rather than asking learners to “find Angle X” in a geometry problem, it might be better to ask them to “find the value of as many angles as possible”.

A couple of other effects are worth noting: the worked example effect and the split-attention effect.

Using Question Stems in the Classroom

Jennifer Gonzalez’s ‘Is Your Classroom Academically Safe?’

Gonzalez’s question stems to scaffold student questioning:

  • This is what I do understand… (summarize up to the point of misunderstanding)
  • Can you tell me if I’ve got this right? (paraphrasing current understanding)
  • Can you please show another example?
  • Could you explain that one more time?
  • Is it ______ or _________? (identifying a point of confusion between two possibilities)

I said:

  • What is ___ in the diagram
  • Am I right in thinking that ___
  • What’s the difference between ___ and ___

Would love more suggestions.

What Would it Take to Fix Education in Australia?

Full article here, but I’ll just talk briefly about two comments made in question time.

Larissa made an interesting point on the role of literacy. Following up on a question from Maxine McKew on the inclusion of Australian literature in Australian schools, she suggested that the literature studied in schools must represent the diversity of our Australian society. If we don’t do this then we’re effectively saying to vast swathes of our society ‘You do not have a place here’.

Glenn: There’s a misalignment between the locus of policy making and the locus of accountability in Australia. We’ve increasingly got federal bodies making decisions that have implications for education right across the country (locus of policy making), whereas accountability for the impacts of these decisions actually falls not at the federal level but at the state level. Fundamentally this is a broken feedback loop (my terminology) that undermines improvements and accountability right throughout the system.

Several times whilst I was listening to this very high-level discussion on education, a quote came to mind that I heard a couple of years ago: ‘If you change what happens in your classroom, you are changing the education system.’

TOT #003. A student reflects on learning strategies, Edu podcasts for kids, computers vs. paper for note taking, and the rise of randomised control trials.

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Show Notes

A Student tries out effective learning strategies

Original author: Syeda Nizami

The Strategies: Spaced Practice, Retrieval Practice, Elaboration, Interleaving, Concrete Examples, Dual Coding

“Overall, each of the six strategies had their strengths and weaknesses, and it somewhat depends on which method is preferable to you, but I think the two that are truly essential are retrieval practice and spacing. Retrieval practice was and is my preferred way of studying for a quiz or exam, but this experience made me realize how truly useful it is. To be perfectly honest, spacing was a strategy I had never tried out before, even though teachers had always stressed that cramming wasn’t effective.”

Edu Podcasts for Kids (or for inspiration!)

The Show about Science: This science interview show is hosted by 6-year-old Nate, and while it has some serious science chops, it’s also just plain adorable. Nate talks to scientists about everything from alligators to radiation to vultures, in his distinctly original interviewing style.

Episode on Ants! Nate’s first interview : ) 

Are laptops and tablets a help or a hindrance to note taking?

The Impact of Computer Usage on Academic Performance: Evidence from a Randomized Trial at the United States Military Academy (Carter, Greenberg and Walker, 2016)

We present findings from a study that prohibited computer devices in randomly selected classrooms of an introductory economics course at the United States Military Academy. Average final exam scores among students assigned to classrooms that allowed computers were 18 percent of a standard deviation lower than exam scores of students in classrooms that prohibited computers. Through the use of two separate treatment arms, we uncover evidence that this negative effect occurs in classrooms where laptops and tablets are permitted without restriction and in classrooms where students are only permitted to use tablets that must remain flat on the desk surface.

One of the highlights of my day at researchED Amsterdam was hearing Paul Kirschner speak about edu-myths. He began his presentation by forbidding the use of laptops or mobile phones, explaining that taking notes electronically leads to poorer recall than handwritten notes. The benefits of handwritten over typed notes include better immediate recall as well as improved retention after 2 weeks. In addition, students who take handwritten notes are more likely to remember facts but also to have better future understanding of the topic. Fascinatingly, it doesn’t even matter whether you ever look at these notes – the simple act of making them appears to be beneficial.

The rise of Randomised Controlled Trials

Original article by Robert Slavin, who told us about reciprocal teaching effects in TOT001.

A nice quote to end on

 

TOT #002. Teaching ‘The Scientific Method’, Growth Mindset Hoax? Instructional Techniques, Class Sizes, and Addressing Visible Disadvantage

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Show Notes

Teaching ‘the scientific method’

Superb post from @mfordhamhistory, on how we can teach students the discipline through a curriculum of case studies: https://t.co/Akgpv6D3NT

— Harry Fletcher-Wood (@HFletcherWood) January 10, 2017

Original post by Michael Fordham

‘1. Disciplines are characterised as much by their internal differences as their similarities.

2. There is no Platonic ideal of each discipline

3. Generalised models of disciplines rarely reflect what happens on the ground

All of these points lead me to great scepticism about curriculum theories in history, science or other disciplines that work by distilling the ‘essence’ from those disciplines, and teaching those. I am not at all convinced that we can teach children ‘the scientific method’ in a general sense before they have learnt a number of cases of scientific research in practice.

History teachers have produced numerous examples of this over the last few years. Steve Mastin, for example, designed a scheme of work in which he taught his pupils how one historian (Eamon Duffy) had worked with a particular body of source material to answer questions about the impact of the reformation in England. Rachel Foster has a similarly well-cited example where she designed a scheme of work around the way two different historians (Goldhagen and Browning) had interpreted the same source material (a report from a police battalion involved in the Holocaust) in quite different ways. In examples such as these, children are taught about a specific example of where historians have undertaken research. Over time, as pupils learn more and more cases of disciplinary practice, we can then teach them the similarities and differences between different approaches: we thus end with abstract ideas, rather than beginning with them.

This means that I would suggest the following as an alternative way of teaching disciplinary practice to school children. Rather than distil some general, abstract ideas about ‘how the discipline works’, we would be better off specifying a range of specific cases of disciplinary practice for children to learn, from which we can as teachers tease out the similarities and differences in approach that characterise our respective disciplines.’

Is Growth Mindset a Hoax?

Original article by Tom Chivers, about the hype around growth mindset: claims that it can do everything from helping struggling students to bringing peace to the Middle East.

‘Scott Alexander, the pseudonymous psychiatrist behind the blog Slate Star Codex, described Dweck’s findings as “really weird”, saying “either something is really wrong here, or [the growth mindset intervention] produces the strongest effects in all of psychology”.
He asks: “Is growth mindset the one concept in psychology which throws up gigantic effect sizes … Or did Carol Dweck really, honest-to-goodness, make a pact with the Devil in which she offered her eternal soul in exchange for spectacular study results?”

The strongest evidence comes from Timothy Bates’ research…

‘Bates told BuzzFeed News that he has been trying to replicate Dweck’s findings in that key mindset study for several years. “We’re running a third study in China now,” he said. “With 200 12-year-olds. And the results are just null.

“People with a growth mindset don’t cope any better with failure. If we give them the mindset intervention, it doesn’t make them behave better. Kids with the growth mindset aren’t getting better grades, either before or after our intervention study.”

Dweck told BuzzFeed News that attempts to replicate can fail because the scientists haven’t created the right conditions. “Not anyone can do a replication,” she said. “We put so much thought into creating an environment; we spend hours and days on each question, on creating a context in which the phenomenon could plausibly emerge.’

Reply by Scott Alexander. http://slatestarcodex.com/2017/01/14/should-buzzfeed-publish-information-which-is-explosive-if-true-but-not-completely-verified/

‘it mentions a psychologist Timothy Bates who has tried to replicate Dweck’s experiments (at least) twice, and failed. This is the strongest evidence the article presents. But I don’t think any of Bates’ failed replications have been published – or at least I couldn’t find them. Yet hundreds of studies that successfully demonstrate growth mindset have been published. Just as a million studies of a fake phenomenon will produce a few positive results, so a million replications of a real phenomenon will produce a few negative results. We have to look at the entire field and see the balance of negative and positive results. The last time I tried to do this, the only thing I could find was this meta-analysis of 113 studies which found a positive effect for growth mindset and relatively little publication bias in the field.’

‘I guess my concern is this: the Buzzfeed article sounds really convincing. But I could write an equally convincing article, with exactly the same structure, refuting eg global warming science. I would start by talking about how global warming is really hyped in the media (true!), that people are making various ridiculous claims about it (true!), interview a few scientists who doubt it (98% of climatologists believing it means 2% don’t), and cite two or three studies that fail to find it (98% of studies supporting it means 2% don’t). Then I would point out slight statistical irregularities in some of the key global warming papers, because every paper has slight statistical irregularities. Then I would talk about the replication crisis a lot.’

‘Again, this isn’t to say I believe in growth mindset. I recently talked to a totally different professor who said he’d tried and failed to replicate some of the original growth mindset work (again, not yet published). But we should do this the right way and not let our intuitions leap ahead of the facts.

I worry that one day there’s going to be some weird effect that actually is a bizarre miracle. Studies will confirm it again and again. And if we’re not careful, we’ll just say “Yeah, but replication crisis, also I heard a rumor that somebody failed to confirm it,” and then forget about it. And then we’ll miss our chance to bring peace to the Middle East just by doing a simple experimental manipulation on the Prime Minister of Israel.’

Using private school instructional techniques in a public school

Greg Ashman pointed me to an article by Joe Kirby on how public schools can adopt some of the practices that high-achieving private schools implement, without the massive cost barriers.

e.g., ‘Teaching writing is heavily guided, even up to sixth form. In History, for instance, starting point sentences are shared for each paragraph of complex essays on new material. Extensive written guidance is shared with pupils. Sub-questions within each paragraph and numerous facts are also shared.’

Does class size matter?

Original article by Valerie Strauss (read the whole article).

How does visible disadvantage impact student outcomes?

Original post by Megan Smith.

Asking students to raise their hand to signal their achievement (when they knew an answer) highlights differences in performance between students, making them more visible. This can lead students from lower social classes, or with lower familiarity with a task, to perform even worse than they would have otherwise. In other words, highlighting performance gaps with no explanation for the gap can make the gap even wider! However, making students aware of the fact that some are more familiar with the tasks, due to extra training, can mitigate these issues.

Working towards a more evidence-informed Professional Development Review process.

My school is currently reviewing our PDR process. As the new head of senior maths, I see this as a really crucial time for me to step up and try to bring some things to the table that will ensure that, as a team, the senior maths teachers are teaching in an evidence-informed fashion.

I’m posting now, prior to submitting final ideas to our college, in order to share some thoughts and hopefully open up a discussion with others so that I can improve and optimise this process.

In partnership with my colleagues, we’ve brought in a whole new instructional process this year at our senior college. At the moment we’re working on bedding it down, and having input into the PDR process means ensuring that we’re all being asked by leadership to provide evidence for instructional practices that we actually think are going to contribute to student learning.

I’ve drafted the document below as a list of things that I myself would like to be measured against and I’m looking to take this to our maths team meeting soon to see if there’s anything that the team would like to add or subtract as we make our submission to leadership. (Hover over the top right of the doc to open in a new page).

I’d love any thoughts or comments on what I’ve put together thus far and how it can be improved.

Note: The ‘goals’ across the top come from our pre-existing PDR process. They’re non-negotiable, so each of the elements I’ve included below will fit under those three goal headings (I’ll work out which goes where later; they’re each broad enough that alignment shouldn’t be an issue).

Note 2: SIM stands for ‘Sunshine Instructional Model’. We have a pre-established instructional model, so I’ve just highlighted the main points that I think map really well onto it.

Any thoughts or comments appreciatively received : )

Ollie.

Edit: I have replaced the original version with the most recent version, as attached below.