
How do we know what to put on the quiz?

I’ve really enjoyed working my way through the three blog posts (1, 2, 3) in Brian Penfound’s Journey to Interleaved Practice series recently. They detail how, prompted by a discussion with the Learning Scientists, Brian has been incorporating interleaving into his integral calculus class.

One particular instrument got me excited: an Excel spreadsheet that can be used to interleave questions when you’re planning both lessons and quizzes (see the blank version here and Brian’s version here). Here’s a screenshot to give you a taster.

[Screenshot of Brian’s interleaving spreadsheet]
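For the programmatically inclined, here’s a rough Python sketch of the kind of interleaving logic a spreadsheet like this encodes. To be clear, the topics and review offsets below are my own illustrative assumptions, not Brian’s actual formulas:

```python
# Illustrative sketch only: the topics and review offsets are my own
# assumptions, not Brian's actual spreadsheet layout.

TOPICS = ["u-substitution", "integration by parts",
          "partial fractions", "trig integrals"]

# Quiz each topic 1, 2, and 4 weeks after it is taught
# (0 = the teaching week itself).
REVIEW_OFFSETS = [0, 1, 2, 4]

def quiz_topics(week):
    """Return the topics to interleave on the given week's quiz."""
    return [topic for taught_week, topic in enumerate(TOPICS, start=1)
            if week - taught_week in REVIEW_OFFSETS]

for week in range(1, 9):
    print(f"Week {week}: {quiz_topics(week)}")
```

The point of pushing the offsets out (1, 2, 4 rather than three weeks in a row) is that each topic keeps resurfacing at growing intervals, which is the spaced, interleaved structure the spreadsheet makes easy to plan.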

Being the focussed (and sometimes obsessed) learning strategist that I am, I really loved this idea. But it got me thinking: is this better than what I’m already doing? Should I adapt my current practice to incorporate this approach?

I’ve written about my assessment and feedback process before here, in which I talk about the weekly quizzes that I give students, and how they incorporate content from the previous three weeks. This means that students see content for a month in a row (in the teaching week, then in the three weeks after that), then they’ll see it in the unit test (a maximum of 4 weeks later, as each topic is approx. 8 weeks long), then in the mid-year practice exam, then in the end-of-year exam.

I wanted to take the opportunity to share how I actually choose which questions to put on these weekly tests (or ‘Progress Checks’ (PCs) as they’re called in my classes).

Each week I run the PC, students self-mark in class immediately afterwards, and then I collect up the PCs. I keep them overnight and return them to students the next day (for two of my classes; the third class waits three days due to timetabling), and in the meantime I enter the marks into my gradebook. When I return the PCs to students (I do this once they’ve settled into some question work), I carry around a little notebook and have a mini-conference with each student. The questions I ask are generally:

“How do you feel you went?”

“What did you get wrong?”

“What mistake did you make?”

“How much prep did you do for this Progress Check?”

And finally

“Which question numbers did you get wrong?”

From that, I collate the following.

[Photo: the collated ‘hard questions’ sheet from the Week 5 PC (de-identified)]

(Any student who doesn’t demonstrate that they prepared for the PC gets a detention, which I also note on this sheet.)

I then take a photo of this and store it with the progress check itself, like so.

[Screenshot: the photographed sheet stored alongside the Progress Check]

Then, when it comes time to write the next week’s PC, I feed in variations of the questions that were answered incorrectly, as well as new content, in addition to other concepts from the previous three weeks that I think are also important to touch on again.
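To make that concrete, here’s a minimal sketch of the selection logic in Python. In reality this all happens in my notebook; the data structures and example values below are purely illustrative:

```python
# A rough sketch of the weekly PC question-selection logic.
# The data structures and example values are illustrative assumptions only;
# in practice this happens in a notebook, not a script.

def build_next_pc(missed_last_week, new_concepts, recent_concepts, variant_of):
    """Assemble the question list for next week's Progress Check."""
    questions = []
    # 1. Variations of questions that were answered incorrectly last week.
    questions += [variant_of[q] for q in missed_last_week]
    # 2. Questions covering this week's new content.
    questions += [f"new question on {c}" for c in new_concepts]
    # 3. Spaced review of other concepts from the previous three weeks
    #    that are worth touching on again.
    questions += [f"review question on {c}" for c in recent_concepts]
    return questions

pc = build_next_pc(
    missed_last_week=["Q3", "Q5"],
    new_concepts=["projectile motion"],
    recent_concepts=["vector components"],
    variant_of={"Q3": "variation of Q3", "Q5": "variation of Q5"},
)
print(pc)
```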

I was really excited by the Excel approach, but I’m still very attached to the adaptive approach that I’m using. Perhaps the optimum lies somewhere in between: a more complex structure than ‘the last 3 weeks’ (such as is offered by the Excel spreadsheet), plus some element of adaptability to the questions and concepts that students are clearly struggling with.

An opportunity for further exploration!

TOT011: Teaching Curiosity, Is Pre-questioning effective? Interrelations between PCK and DI, and Illusory Superiority.

Teacher Ollie’s Takeaways is a weekly post (and sometimes a podcast!) bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here

How do we teach Curiosity?

In this blog post Michael Fordham writes that we can’t teach curiosity in the abstract; we need to teach students things that they can then be curious about.

Here’s an excerpt.

On this line, when I say that I want to cultivate the curiosity of my pupils, what I am in practice saying is that I want them to be curious about a greater range of things. I want to bring more parts of our reality into the realm of their experience. I cannot make them more or less curious per se: what I can do is give them more things to be curious about.

This is why memories are so important to me. A pupil who has remembered some of the things I taught her about neoclassical architecture is more likely to be curious about a building built in that style. Indeed, she may well be more likely to be curious about a building not built in that style. Another pupil who remembers something I taught him about the causes of cholera in the nineteenth century might have his ears prick up when he hears about an outbreak, or reads about it somewhere else. This is in part what I think people mean when they say that knowledge breeds more knowledge.

Should we use pre-questions?

This is a fantastic article detailing a study by Carpenter and Toftness that explores whether or not we should ask pre-questions. Here’s what it found.

  1. The benefit of pre-questioning prior to reading is that it improves students’ retention of the information that was asked about.
  2. The cost of pre-questioning prior to reading is that it reduces students’ retention of information that wasn’t asked about.
  3. Interestingly, when pre-questioning prior to a video, we see a boost in retention of both pre-questioned and non-pre-questioned information.

So why is this?

The authors suggest that this could be because, when reading, it’s easy for students to ignore everything except the pre-questioned information, whereas when watching a video the effect is to simply focus harder the whole time.

Podcast with Michaela’s Head of Mathematics, Dani Quinn

Well worth a listen, I’ll leave it at that.

The more a teacher knows about how to teach their subject, the more they should use direct instruction

In this post, Greg Ashman outlines how the work of Agodini and colleagues pitted two constructivist-based approaches against two direct instruction approaches to middle years maths instruction (in an RCT). A recent analysis of their results by Eric Taylor found that for teachers who scored lower on a ‘Mathematical Knowledge for Teaching’ test (i.e., PCK), there was less difference between the outcomes of the constructivist and the DI methods. Ashman explains this as follows.

In a program where the teacher has to stand up and actually teach maths, their maths skills matter, but when the students have to figure things out for themselves then the more skilled teachers have no way of making use of their greater skill level.

And from this, Ashman makes the following suggestions.

  1. Primary teachers must pass a maths skills test if they are to teach mathematics (schools could perhaps reorganise so that maths was taught by specialists to get around the problem of getting all teachers to this level)
  2. Primary teachers who lack maths skills should be given training in this area
  3. Explicit programs for teaching maths should be adopted in primary schools

How to rebut an argument with style

With name-calling on the lowest rung of the disagreement hierarchy, we move through Ad Hominem, Responding to Tone, Contradiction, Counterargument, and Refutation, and conclude with Refuting the Central Point. A relevant article in these times of online jousting.

Why do some Immigrants Excel in Science?

The study by Marcos Rangel reported in the article found that a particular set of characteristics was associated with immigrant students doing particularly well in science. The article states:

Bacolod and Rangel subdivided the immigrants in two ways. First, whether they arrived in early childhood, before age 10. Second, whether their native language was linguistically close to English — say, German — or less similar — say, Vietnamese. Most linguists agree that these two factors have a dramatic impact on someone’s chances of becoming perfectly fluent in a second language…

…among the subset of immigrants who attended college, the ones who arrived later and from more linguistically distinct places — think the Vietnamese teen, not the German toddler — were many times more likely to major in a STEM field.

The authors argue that this is simply specialisation, suggesting: “If it were just as easy for me to write with my left hand as with my right, I would be using both. But no, I specialize.” So, in many ways, it appears to be a very rational decision. For those learning a second language later in life, the greatest chance of success is to focus on an area where a potential language differential is less of an Achilles heel.

Hey teacher, are you really as good as you think at explaining things?

In this post, Ben Newmark details his somewhat humbling journey to clearer explanations for his students, and the role that videotaping himself played in this journey.

To remember: the phrase ‘illusory superiority’, coined in 1991 by Van Yperen. We tend to overestimate our abilities in relation to others.

Assessment feedback: Processes to ensure that students think!

We know that ‘memory is the residue of thought’ (Daniel Willingham) and that in order for our students to learn they must actively think about the content to be learnt. This allows the content to occupy their working memory for long enough, and to become anchored to sufficient elements in their long-term memory, to trigger a change in long-term memory, which is one well-respected definition of ‘learning’ (Paul Kirschner).

One of the arenas of teaching in which this can be most challenging is feedback delivery to students. Dylan Wiliam sums it up well in the following quote (which I came across thanks to Alfie Kohn):

“When students receive both scores and comments, the first thing they look at is their score, and the second thing they look at is…someone else’s score.”

Note: The original quote can be found here (beware the paywall).

The challenge is, then, how do we give feedback to our students in a way that encourages them to actively think about their mistakes, and helps them to do better next time?

In the following I’ll share how I give feedback to students in two contexts: first, on the low-stakes assessments that I carry out in my own classroom, and second, on major assessment pieces that contribute towards their final unit mark.

Assessment Feedback on weekly Progress Checks.

Before we dive in I’ll just paint a picture of how my weekly ‘Progress Checks’ fit into my teaching and learning cycle, and how each of these elements is informed by education theory.

At the start of each week students are provided with a list of ‘weekly questions’. They know that the teaching during the week will teach them how to answer these questions. Questions are to be aligned with what we want students to be able to do (curriculum and exams) (Backwards Design). Students are provided with worked solutions to all questions at the time of question distribution (The worked example effect). The only homework at this stage of the cycle is for students to ensure that they can do the weekly questions.

‘Progress Checks’ (mini tests, max 15 minutes) are held weekly (Testing Effect). Progress Checks include content from the previous three weeks, which means that students see the main concepts from each week for a month (Distributed Practice). These PCs are low-stakes for Year 11 students (contributing 10% to their final overall mark), and in Year 12 (where assessment protocols are more specifically defined) they are simply used to inform teachers and students of student progress.

Edit: Here’s a new post on how I use student responses to these PCs to construct the next PCs. 

When designing the progress checks I had two main goals: 1) Ensure that students extract as much learning as possible from these weekly tests, 2) Make sure that marking them didn’t take up hours of my time. The following process is what I came up with.

Straight after the PC I get students to clear their desks, I hand them a red pen, and I do a think-aloud for the whole PC while they mark their own papers. This is great because it’s immediate feedback and self-marking (see Dylan Wiliam’s black box paper), and it allows me to model the thinking of a (relative) expert and to be really clear about what students will and won’t receive marks for. Following this, any student who didn’t attain 100% on the Progress Check chooses one question that they got incorrect and does a reflection on it based on four questions: 1) What was the question? 2) Which concept did this address? 3) What did you get wrong? 4) What will you do next time?

Here are some examples of student self-marked progress checks and accompanying PC reflections from the same students (both from my Y11 physics class). Note: Photos of reflections are submitted via email and I use Gmail filters to auto-file these emails by class.

[Image: a student’s self-marked Progress Check]

Note how this student was made aware of consequential (follow-through) marks on question 1.

Here’s the PC reflection from this same student (based upon question 2).

[Image: the same student’s PC reflection]

Here’s another student’s self-marked Progress Check.

[Image: another student’s self-marked Progress Check]

And the associated reflection.

[Images: the associated PC reflection]

Students are recognised and congratulated by the whole class if they get 100% on their progress checks, and one student from each class wins the ‘Best PC Reflection of the Week’ award. This allows me to project their reflection onto the board and point out what was good about it: highlighting an ideal example to the rest of the class, celebrating students’ successes, rewarding students for effort, and framing mistakes as learning opportunities.

I think that this process achieves my two main goals pretty well. Clearly these PCs form an integral learning opportunity, and in sum it only takes me about 7 minutes per class per week to enter PC marks into my gradebook.

Assessment Feedback on Mandated Assessment Tasks.

There are times when, as teachers, we need to knuckle down and mark a bunch of work. For me this is the case with school-assessed coursework (SACs), which contributes to my students’ end-of-year study scores. I was faced with the challenge of making feedback for such a test as beneficial to my students’ learning as the PC feedback process is. Here’s what I worked out.

  1. On test day, students receive their test in a plastic pocket, unstapled.
  2. At the start of the test, students are told to put their name at the top of every sheet.
  3. At the end of the test I take all of the papers straight to the photocopier and, before marking, photocopy the unmarked papers.
  4. I mark the originals. (Though the photocopying takes some time, I think this process makes marking faster in the end because a) I can group all the page 1s together (etc.) and mark one page at a time, which is better for moderation too, and b) I write minimal written feedback because I know what’s coming next…)
  5. In the next lesson I hand out students’ photocopied versions and I go through the solutions with the whole class. This means that students are still marking their own papers and still concentrating on all the answers.
  6. Once they’ve marked their own papers I hand them back their marked original (without a final mark on it, just totals at the bottom of each page), they identify any discrepancies between my marking and their marking, then we discuss and come to an agreement. This also prompts me to be more explicit about my marking scheme as I’m being held to account by the students.

In Closing

I’ve already asked students for feedback on the progress checks through whole-class surveys. The consensus is that they really appreciate them, and they like the modelling of the solutions and the self-marking too. Putting together this post also prompted me to contact my students and ask for feedback on the self-marking process for their photocopied mandated assessment task. I’ll finish this post with a few comments that students said they’d be happy for me to share. It also provides some great feedback to me for next time.

I’d love any reflections that readers have on the efficacy of these processes and how they could potentially be improved.

From the keyboards of some of my students (3 males, 3 females, 5 from Y12, one from Y11).

[Screenshots of students’ written feedback]

Edit:

A fellow maths teacher from another school in Melbourne, Wendy, tried out this method with a couple of modifications. I thought that the modifications were really creative, and I think they offer another approach that could work really well. Here’s what Wendy said.

Hey Ollie,

I used your strategy today with photocopying students’ sacs and having them self correct. The kids responded so well!

Beyond them asking lots of questions and being highly engaged, those that I got feedback from were really positive saying how it made them look at their work more closely than they would if I just gave them an already corrected test, understood how the marking scheme worked (and seeing a perfect solution) and they liked that they could see why they got the mark they did and had ‘prewarning’ of their mark.

Thanks heaps for sharing the approach.
A couple of small changes I made were
  • I stapled the test originally then just cut the corner, copied them and then restapled. It was very quick and could be done after the test without having to put each test in a plastic pocket
  • I gave the students one copy of the solutions between two. Almost all kids scored above 50% and most around the 70% mark, and I didn’t want them to have to sit through solutions they already had.

If you have thoughts/comments on these changes I’d love to hear them.

Thanks again!

References

Find references to all theories cited (in brackets) here.

TOT010: Sensitively ‘correcting’ students’ English, the limits of ‘evidence-based’ education, and more

Teacher Ollie’s Takeaways is a weekly post (and sometimes a podcast!) bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here

Sensitive Instruction of English

Really enjoyed this episode of the Cult of Pedagogy Podcast. It can be a challenge to know how to correct the culturally-based idiosyncrasies in our students’ speech in a culturally-sensitive way. Jennifer Gonzalez and Dena Simmons discuss how to do this with both respect and finesse. Well worth a listen!

Challenging the fallacious Fact/Value divide in Education Research

I’m naturally a very quantitatively driven guy. I seem to be drawn to numerical metrics of success, sometimes missing the forest for the trees. Something I’ve been exploring a lot recently is the assumptions underlying much educational research. Here’s just one of the blog posts that I’ve found stimulating in this space in the last few weeks. I’ll hopefully blog more about this in the not too distant future.


On the ‘Fact/Value’ false dichotomy. 

The one side asserts the importance of facts and thinks you cannot argue rationally using evidence  about values so excludes them from science, the other side asserts the importance of values and agrees that these cannot be put on a proper research footing so exclude themselves from science. But what if the claim introduced so casually by Hume nearly 300 years ago is simply wrong? What if we can derive values from an investigation of the facts?

And values are always present

Values enter into research when we select what to look at, when we decide how to look at it and when we interpret the meaning of what we think we see (Standish, 2001). So values are always implicit behind ‘experimental designs’.

Double loop learning!

[Diagram: double loop learning]

My suggestion from this example is that what appears to many researchers as an unbridgeable divide between facts and values within educational research is perhaps better understood as the real difference in quality and temporality of these two intertwined research loops. On the one hand focused empirical research within a theoretical framework that embeds values and on the other hand the rather larger and rather longer dialogue of theory that can question and develop the assumptions of research frames. Both loops can be seen as united in a larger conception of science as collective ‘wissenshaft’ or shared knowledge. Both call upon evidence from experience and both make arguments that seek to persuade. While research findings from the smaller loops of empirical science based on narrow frameworks of assumptions can seem to progress faster and be more certain for a while than the findings of the larger borderless transdisciplinary shared enquiry of humanity this is an illusion because in fact the cogency of the assumptions behind narrow research programmes depend upon justifications that can only be made within the larger dialogue.

This boils down to the fact that we need to ask ourselves… ‘more efficient at what?’


And here’s another quote from a Schoenfeld article on the same topic!

Do Comprehensive Schools Reduce Social Mobility?

Just a paper that I thought some readers might like to check out on this subject!

Boliver, V., & Swift, A. (2011). Do comprehensive schools reduce social mobility? The British Journal of Sociology, 62(1), 89–110.

Maybe the source of the PISA discrepancy is in large part due to paper- vs. computer-based implementation!?!

 

A Behaviour Management Guide for School Leaders

Google Drive Tools for Teachers

Addressing issues of cultural and political sensitivity in the classroom

This article is well worth a read! Here are some of my favourite quotes…

“When it feels more partisan, we walk more of a tightrope. For the ‘alt-right,’ I didn’t feel we had to walk a tightrope,” said Leslie, who viewed teaching about the alt-right as akin to teaching about the KKK. Racism ought to be a non-partisan subject, she said.

Learning about the alt-right, for example, is a lesson in political literacy. Teachers should not ask students to decide whether the alt-right is a good thing, but they can teach how it came about and how it has affected the political system, Hess said.

 

ERRR #004. Paul Weldon, Teacher Supply and Demand, and Out of Field Teaching

Listen to all past episodes of the ERRR podcast here.


Paul Weldon is a Senior Research Fellow with the Australian Council for Educational Research. He works on multiple different educational research programs and is commonly involved in program evaluation and the design, delivery and analysis of surveys. Through his work on the Staff in Australia’s Schools (SiAS) surveys in 2010 and 2013, Paul developed a particular interest in the teacher workforce. He was the lead writer of the most recent Victorian Teacher Supply and Demand Report, and led the recent AEU Victoria workload survey.

In this episode we talk to Paul about his two papers, The Teacher workforce in Australia: Supply, demand and data issues and Out-of-field teaching in Australian secondary schools. This episode’s discussion includes an in-depth examination of the claim that ‘30% of teachers leave within the first 3 years and 50% within the first 5’, often quoted in relation to retention of early career teachers; the landscape of teacher supply and demand out to 2020; and what the distribution of out-of-field teaching in Australia says about how we value our out-of-field teachers.

Links mentioned in the podcast:

 

Australian Policy Online: ‘a research database and alert service providing free access to full text research reports and papers, statistics and other resources essential for public policy development and implementation in Australia, New Zealand and beyond.’

Striving to create an evidence-informed student feedback form.

Seeing as my students have to endure my presence, instructions, and bad jokes for 3 hours and 45 minutes each week, I figure the least I can do is give them an opportunity to tell me how I can make this task a little easier for them. In my first year of teaching I knocked together the below form. I’ve used it for a year now and it’s been really helpful to date. In particular, it’s helped me to bring more celebration into my classroom, with many students over the past year indicating that they want their successes to be celebrated more (usually with lollies!). 
[Screenshot of the original student feedback form]

This has been great, but as I’ve moved into my role as head of senior maths this year it’s prompted me to think more strategically about student feedback, and the role it can play in my own, and my team’s professional development.

No feedback form is going to tell a teacher, or a team leader, everything they need to know in terms of ‘Where am I going? How am I going? Where to next?’, but I’ve been feeling more and more as though these forms do have a key role to play in helping teachers to spot gaps, and in motivating and inspiring us to improve our praxis.

I was really happy with the willingness of my team to roll out the above form (obviously with ‘Ollie’ changed to their individual names) in their own classes, and the insights gained were very illuminating. But coupling these feedback forms with my own observations provided an even bigger insight for me. It surprised me just how differently students (novices when it comes to principles of instruction) and I (a relative expert) view what happens in a classroom.

From this it became more apparent to me that if I want student feedback to more effectively drive my own professional development, I need to start asking better and more targeted questions that will allow me to see exactly where my teaching is excelling, and where I’m falling short.

So, here’s a first draft of the new feedback questions (which I’ll eventually turn into a Google Form). I’ve based it on the Sutton Trust’s article What makes great teaching? Review of the underpinning research, headed up by Robert Coe. I’ve used the first four of the six “common components suggested by research that teachers should consider when assessing teaching quality” (p. 2). These are the components rated as having ‘strong’ or ‘moderate’ evidence of impact on student outcomes, and they’re also the components with observable outcomes in the classroom (5 and 6 are ‘Teacher Beliefs’ and ‘Professional Behaviours’, which encapsulate practices like reflecting on praxis and collaborating with colleagues).

For each of the following I’ll get students to rate the sentence from 1 (strongly disagree) to 5 (strongly agree), in the hope that this will give me a better idea of how students interpret the various components of my teaching and teacher disposition.

I’ll also add a question at the end along the lines of ‘Is there anything else you’d like to add?’.
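Once the form is live, summarising the responses should be quick. Here’s a minimal sketch, assuming I export the responses from the eventual Google Form as a CSV with one 1–5 rating column per question (the file name and column labels are my assumptions about how I’ll set it up):

```python
# Minimal sketch: mean rating per question from exported form responses.
# Assumes a CSV named "feedback_responses.csv" with one 1-5 rating column
# per question (column labels like "1.1" are an assumption).
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])  # question label -> [sum, count]
with open("feedback_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        for question, rating in row.items():
            # Skip the timestamp and free-text columns.
            if isinstance(rating, str) and rating.strip().isdigit():
                totals[question][0] += int(rating)
                totals[question][1] += 1

for question, (total, count) in sorted(totals.items()):
    print(f"Q{question}: mean {total / count:.2f} ({count} responses)")
```

Grouping the question labels by their leading digit would then give a rough score per component (content knowledge, quality of instruction, climate, management).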

I’ve numbered the Qs to make it easy for people to make comments about them on Twitter. This is a working document, and today is the second day of our 2-week Easter break. I’m keen to perfect this as much as possible prior to Term 2. Please have a read; I’d love your thoughts and feedback : )

Ollie.

Link to Twitter discussion here.

Four (of the 6) components of great teaching (Coe et al., 2014).

Ollie’s Questions.

1. (Pedagogical) content knowledge (Strong evidence of impact on student outcomes)

The most effective teachers have deep knowledge of the subjects they teach, and when teachers’ knowledge falls below a certain level it is a significant impediment to students’ learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students’ own methods, and identify students’ common misconceptions.

1.1 Ollie has a deep understanding of the maths that he teaches you. He really ‘knows his stuff’.

 

1.2 Ollie has a good understanding of how students learn. He really ‘knows how to teach’.

 

 

2. Quality of instruction (Strong evidence of impact on student outcomes)

Includes elements such as effective questioning and use of assessment by teachers. Specific practices, like reviewing previous learning, providing model responses for students, giving adequate time for practice to embed skills securely, and progressively introducing new learning (scaffolding), are also elements of high quality instruction.

 

2.1 Ollie clearly communicates to students what they need to be able to do, and how to do it.

 

2.2 Ollie asks good questions of the class. His questions test our understanding and help us to better understand too.

 

2.3 Ollie gives us enough time to practice in class.

 

2.4 The different parts of Ollie’s lessons are clear. Students know what they should be doing at different times throughout Ollie’s lessons.

 

2.5 The way that Ollie assesses us helps both us and him to know where we’re at, what we do and don’t know, and what we need to work more on.

 

2.6 Ollie spends enough time revisiting previous content in class that we don’t forget it.

3. Classroom climate (Moderate evidence of impact on student outcomes)

Covers quality of interactions between teachers and students, and teacher expectations: the need to create a classroom that is constantly demanding more, but still recognising students’ self-worth. It also involves attributing student success to effort rather than ability and valuing resilience to failure (grit).

 

3.1 Students in Ollie’s class feel academically safe. That is, they don’t feel they’ll be ridiculed if they get something wrong.

 

3.2 Students in Ollie’s class feel socially safe. That is, Ollie promotes cooperation and support between students and he’ll step in if he thinks a student is being picked on by other students.

 

3.3 Ollie cares just as much about students doing their best and trying hard as he does about them being ‘smart’ or getting high results.

 

3.4 Ollie cares about every student in his class.

 

3.5 Ollie has high expectations of us and what we can achieve.

4. Classroom management (Moderate evidence of impact on student outcomes)

A teacher’s abilities to make efficient use of lesson time, to coordinate classroom resources and space, and to manage students’ behaviour with clear rules that are consistently enforced, are all relevant to maximising the learning that can take place. These environmental factors are necessary for good learning rather than its direct components.

 

4.1 Ollie manages the class’ behaviour well so that we can maximise our time spent learning.

 

4.2 There are clear rules and consequences in Ollie’s class.

 

4.3 Ollie is consistent in applying his rules.

 

4.4 The rules and consequences in Ollie’s class are fair and reasonable, and they help to support our learning.

 

4.5 Students work hard in Ollie’s class.

 

 

TOT009:

Teacher Ollie’s Takeaways is a weekly post bringing together all of the fascinating things that Ollie read throughout the week! Find all past posts of Teacher Ollie’s Takeaways here

Astrophysicists and feminism

A great post, prompted by a meme shared for International Women’s Day, on how young women aspiring to be astrophysicists is great, but so is little girls aspiring to be princesses…

What makes a good PD?

Turns out that almost all professional development for teachers fails, that is, it doesn’t have any measurable impact on student learning (great citations for this in this article). In the face of this, should we give up on PD altogether? In this article @HfFletcherWood tells us some of the keys to good PD.

PISA and Technology in the Classroom

20 good youtube channels for Maths Teachers

The back and forth on explicit instruction

If you want to hear leaders in their field engaging in the constructivism vs. explicit instruction debate, the articles linked to in the comments of this article are a fantastic place to start. I’m working my way through them at the moment.

The performance of partially selective schools in England

Do partially selective schools improve results for students? Here’s a moderate-scale study suggesting partially selective schools maybe don’t have such beneficial effects for those who attend…

Philosophy For Children. Effective or not?

Philosophy for Children is a program that aims to teach students how to think philosophically, to improve oracy skills, and to develop communication more broadly. Here’s a study attesting to its efficacy; see replies to this tweet for an alternative view…

The Mighty, A website highlighting the writing of Mighty People

Eloquent argument against the same old ‘new education’ assumptions

Tom Bennett argues against a new film that rips on our educational system. The film trots out all the usual ‘stifles creativity’ and ‘rote learning’ tropes. Great reply from Tom Bennett.

What to do when your child stares at another child with a disability?

Great post from Daniel Willingham. Hot tip: ensure it’s a social interaction. Follow the link for more.

Trump’s policies in perspective

Just because…

TOT008:

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Not a podcast this week, just a few notes on key takeaways : )

Seminal Papers in Educational Psychology.

Check them out!

Guide your teaching by setting questions that you want the students to be able to answer.

Birmo tweets about the new ‘My Induction’ app.

It’s pretty interesting, got some decent tips, and some good starting points for new teachers.

Collection of evidence on direct instruction.

This is gold! E.g., I knew I’d read somewhere in the last PISA report that inquiry learning was negatively associated with science outcomes, spent about 15 mins last week trying to re-find it, then gave up. Lo and behold, it’s right here!!!

Further dissecting Growth Mindset.

This has been a hot topic on Twitter recently. Here’s a collation of posts, well worth a look.

More evidence for Explicit Instruction in Maths

Effectiveness of Explicit and Constructivist Mathematics Instruction for Low-Achieving Students in the Netherlands

A must listen podcast!

I love the Mr. Barton Podcast, and this week was an absolute ripper. I can’t think of a better use of 2 hours of a teacher’s time than to listen to this!

How deep can a simple maths question take us?

A really simple maths question, with some amazing results!

Here’s a sneak peek

[Screenshots: two sample solutions to the ‘Quarter the Cross’ task]

Source: https://blogs.adelaide.edu.au/maths-learning/2016/04/12/quarter-the-cross/

Just for Fun. Pie Graphs in Action!!!

Want to see an elegant example of scaffolding?

How to help students to move from concrete examples to generalisations. This is a short and sweet classroom snapshot of how to do this incredibly effectively.

‘When will I ever use this?’: The ultimate comeback!!!

Thanks for joining me for another week with Teacher Ollie’s Takeaways : )

O.

TOT005: Why constructivism doesn’t work, evolution and cognition, the reliability of classroom observations, routines, and a classroom story

Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here

Show Notes

Why minimal guidance during instruction doesn’t work

Ref: Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

The arguments for and against minimally guided instruction

  • Assertion:

    The most recent version of instruction with minimal guidance comes from constructivism (e.g., Steffe & Gale, 1995), which appears to have been derived from observations that knowledge is constructed by learners and so (a) they need to have the opportunity to construct by being presented with goals and minimal information, and (b) learning is idiosyncratic and so a common instructional format or strategies are ineffective.

  • Response:

    “The constructivist description of learning is accurate, but the instructional consequences suggested by constructivists do not necessarily follow.”

Learners have to construct a mental schema of the information in the end; that’s what we’re trying to furnish them with. And it turns out that the less of a schema we give them (as with minimal guidance), the less complete a schema they end up with. Essentially, giving them the full picture better helps them to construct the full picture!

  • Assertion:

    Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on learning a discipline by experiencing the processes and procedures of the discipline (Handelsman et. al., 2004; Hodson, 1988). This change in focus was accompanied by an assumption shared by many leading educators and discipline specialists that knowledge can best be learned or only learned through experience that is based primarily on the procedures of the discipline. This point of view led to a commitment by educators to extensive practical or project work, and the rejection of instruction based on the facts, laws, principles and theories that make up a discipline’s content accompanied by the use of discovery and inquiry methods of instruction.

  • Response:

    …it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes (i.e., the epistemology) of the discipline being studied and a mistake to assume that instruction should exclusively focus on methods and processes. (see Shulman (1986; Shulman & Hutchings, 1999)).

This gets to the heart of the distinction between experts and novices. Experts and novices simply don’t learn the same way. They don’t have the same background knowledge at their disposal. By teaching novices in the way that experts should be taught we’re really doing them a disservice, overloading working memories, and simply being ineffective teachers.

Drilling down to the evidence:

None of the preceding arguments and theorizing would be important if there was a clear body of research using controlled experiments indicating that unguided or minimally guided instruction was more effective than guided instruction. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name with the cycle then repeating itself. Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the “debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning” (p. 18).

Current Research Supporting Direct Guidance

The list is too long; here are some excerpts.

Aulls (2002), who observed a number of teachers as they implemented constructivist activities… He described the “scaffolding” that the most effective teachers introduced when students failed to make learning progress in a discovery setting. He reported that the teacher whose students achieved all of their learning goals spent a great deal of time in instructional interactions with students.

Stronger evidence from well-designed, controlled experimental studies also supports direct instructional guidance (e.g., see Moreno, 2004; Tuovinen & Sweller, 1999).

Klahr and Nigam (2004) tested transfer following discovery learning, and found that those relatively few students who learned via discovery ‘showed no signs of superior quality of learning’.

Re-visiting Sweller’s ‘Story of a Research Program’.

From last week: Goal free effect, worked example effect, split attention effect.

My post from this week on trying out the goal free effect in my classroom.

See full paper here.

David Geary provided the relevant theoretical constructs (Geary, 2012). He described two categories of knowledge: biologically primary knowledge that we have evolved to acquire and so learn effortlessly and unconsciously and biologically secondary knowledge that we need for cultural reasons. Examples of primary knowledge are learning to listen and speak a first language while virtually everything learned in educational institutions provides an example of secondary knowledge. We invented schools in order to provide biologically secondary knowledge. (pg. 11)

For many years our field had been faced with arguments along the following lines. Look at the ease with which people learn outside of class and the difficulty they have learning in class. They can accomplish objectively complex tasks such as learning to listen and speak, to recognise faces, or to interact with each other, with consummate ease. In contrast, look at how relatively difficult it is for students to learn to read and write, learn mathematics or learn any of the other subjects taught in class. The key, the argument went, was to make learning in class more similar to learning outside of class. If we made learning in class similar to learning outside of class, it would be just as natural and easy.

How might we model learning in class on learning outside of class? The argument was obvious. We should allow learners to discover knowledge for themselves without explicit teaching. We should not present information to learners – it was called “knowledge transmission” – because that is an unnatural, perhaps impossible, way of learning. We cannot transmit knowledge to learners because they have to construct it themselves. All we can do is organize the conditions that will facilitate knowledge construction and then leave it to students to construct their version of reality themselves. The argument was plausible and swept the education world.

The argument had one flaw. It was impossible to develop a body of empirical literature supporting it using properly constructed, randomized, controlled trials.

The worked example effect demonstrated clearly that showing learners how to do something was far better than having them work it out themselves. Of course, with the advantage of hindsight provided by Geary’s distinction between biologically primary and secondary knowledge, it is obvious where the problem lies. The difference in ease of learning between class-based and non-class-based topics had nothing to do with differences in how they were taught and everything to do with differences in the nature of the topics.

If class-based topics really could be learned as easily as non-class-based topics, we would never have bothered including them in a curriculum since they would be learned perfectly well without ever being mentioned in educational institutions. If children are not explicitly taught to read and write in school, most of them will not learn to read and write. In contrast, they will learn to listen and speak without ever going to school.

Re-visit Heather Hill.

I asked: Dylan Wiliam quotes you and says ‘Heather Hill’s – http://hvrd.me/TtXcYh – work at Harvard suggested that a teacher would need to be observed teaching 5 different classes, with every observation made by 6 independent observers, to reduce chance enough to really be able to reliably judge a teacher.’

Heather replied.

Thanks for your question about how many observations are necessary. It really depends upon the purpose for use.

1. If the use is teacher professional development. I wouldn’t worry too much about score reliability if the observations are used for informal/growth purposes. It’s much more valuable to have teachers and observers actually processing the instruction they are seeing, and then talking about it, than to be spending their time worrying about the “right” score for a lesson.

That principle is actually the basis for our own coaching program, which we built around our observation instrument (the MQI):

http://mqicoaching.cepr.harvard.edu

The goal is to have teachers learn the MQI (though any instrument would do), then analyze their own instruction vis-a-vis the MQI, and plan for improvement by using the upper MQI score points as targets. So for instance, if a teacher concludes that she is a “low” for student engagement, she then plans with her coach how to become a “mid” on this item. The coach serves as a therapist of sorts, giving teachers tools, cheering her on, and making sure she stays on course rather than telling the teacher exactly what to do. During this process, we’re not actually too concerned that either the teacher (or even coach) scores correctly; we do want folks to be noticing what we notice, however, about instruction. A granular distinction, but one that makes coaching much easier.

2. If the use is for formal evaluation. Here, score reliability matters much more, especially if there’s going to be consequential decisions made based on teacher scores. You don’t want to be wrong about promoting a teacher or selecting a coach based on excellent classroom instruction. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see a paper I wrote with Matt Kraft and Charalambos Charalambous in Educational Researcher) to get reliable scores. However, my colleague Andrew Ho and colleagues came up with the 6 observations/5 observer estimates from the Measures of Effective Teaching data:

http://k12education.gatesfoundation.org/wp-content/uploads/2015/12/MET_Reliability-of-Classroom-Observations_Research-Paper.pdf

And looking at our own reliability data from recent uses of the MQI, I tend to believe his estimate more than our own. I’d also add that better score reliability can probably be achieved if a “community of practice” is doing the scoring — folks who have taken the instrument and adapted it slightly to their own ideas and needs. It’s a bet that I have, but not one that I’ve tested (other than informally).

The actual MQI instrument itself and its training is here:

http://isites.harvard.edu/icb/icb.do?keyword=mqi_training

We’re always happy to answer questions, either about the instrument, scoring, or the coaching.

Best,
Heather

Routines.

Post from Gary Jones, ‘Do you work in a “stupid” school?’, on functional stupidity and how smart people end up doing silly things that result in all sorts of bad outcomes, one of which is poor instruction for students.

Here are two of the 7 routines that the post highlighted for avoiding functional stupidity (originally from Alvesson, M., & Spicer, A. (2016). The Stupidity Paradox: The Power and Pitfalls of Functional Stupidity at Work).

Newcomers – find ways of taking advantage of the perspective of new members of staff and their ‘beginner’s mind’. Ask them: What seems strange or confusing? What’s different? What could be done differently?

Pre-mortems – work out why a project ‘failed’ before you even start the project. See http://evidencebasededucationalleadership.blogspot.com/2016/11/the-school-research-lead-premortems-and.html for more details.

 

From the classroom…