
The battle for deliberate practice

It’s exam time and your students are preparing. You’re going around the class, observing how students are studying and, shock horror, they are re-reading and re-writing their notes. The notes are literally going from one notebook to another without passing through their brains in between. As a teacher this is one of the most frustrating things to see, and recently I’ve been on a bit of a warpath to try to stop it.

This is a short post to celebrate some of my students doing deliberate practice. This year I’ve been stressing the importance of students re-doing questions (as opposed to just re-reading them).

[Image: how Ericsson describes deliberate practice (p. 367)]

The way that I’ve advocated for this is to ask students to:

1. Identify questions that they got incorrect in our weekly tests.
2. Get a book or another sheet of paper and cover the answer.
3. Re-do the question.
4. Slide the book/piece of paper down and check.
5. Re-do the question again if they got it wrong.
6. Re-do it again a few hours/days later to consolidate.

I’ve felt like a bit of a broken record but then, today, I had my day made when, walking around the class, I saw these two students!

[Photos: two students re-doing test questions with the answers covered]

To attend to the motivational component of deliberate practice, I knocked up this sheet, which I gave to students at the start of today’s revision session.

Deliberate practice for the win. Just wanted to celebrate. Hopefully it pays off in their exam.

‘The Points System’: One approach to differentiation

When I was in school, I remember finding nothing more boring than the instruction to ‘do questions 1 to 5, parts a) to h)’. I would get my head around the concept by question b or c of each section, then have to spend what seemed like an eternity working through a whole bunch of exactly the same question with different numbers. With my own experience in mind, I’ve always known that I wanted to avoid this same boredom for my own students.

This approach to differentiation (and to reducing boring repetition) was brought to my attention by one of my Master of Teaching lecturers, Nicky Dulfer, who said that one of her friends had implemented an ‘earn points’ system in their own mathematics class. I thought it sounded like a nice idea. Here’s how I approached it…

The slide I showed to students:

[Slide: the points system]

For those unfamiliar with the proficiency strands of fluency, understanding, reasoning and problem solving (‘open-ended’ in this case), here are a few example questions for context (from the Pearson Mathematics 9 textbook).

Did students like it?

This was implemented with a year 9 mathematics class.

Mid-placement feedback: In response to the question ‘What should Ollie keep on doing?’, five students nominated the points system. In response to the question ‘What should Ollie stop doing or modify?’, no students nominated the points system. (The highest number of votes for any one category was 8, for the suggestion that I continue making videos of content.)

End-of-placement feedback: In response to the question ‘What did Ollie do that most helped you learn?’, four students nominated the points system. In response to the question ‘What are some things Ollie shouldn’t do in future, or things to modify?’, one student said ‘Point system (15 in 30 minutes was too rushed and stressful)’. (The highest number of votes for any one category was 12, for micro-revisions (post on micro-revisions to come…), which were what most helped students to learn.)

So, on the whole, the points system was well received by students.

Did it help the students to learn?


Unfortunately I didn’t run the points system for long enough to be able to tell whether it increased student learning. But I can say that it appeared to increase engagement, and as ‘a major precursor to learning is engagement’ (Hattie, 2012, Chapter 8, Section 2, para. 1), it’s plausible that it increased learning.

I do, however, think it’s fair to say that this task helped the higher achieving students to learn more. After they had gained their 15 points (some of them would finish this in under 10 minutes), students were able to move onto ‘challenge questions’, like the Pythagoras challenge questions that I’ve written about previously. The traditional approach for these students was to get them to do a set of questions from each of the proficiency strands (fluency, understanding, etc.), which would take up the whole lesson and was pointless in many cases, as these high achieving students could easily complete the task and weren’t being challenged at all.

The idea of students self-differentiating was also intended to help promote metacognition. Fostering metacognition is a key step to helping students to ‘become their own teachers, which is the core attribute of lifelong learning or self-regulation, and of the love of learning’ (Hattie, 2012, loc 168). But actually scaffolding this metacognition is something that I need to do better in this task and in the classroom more generally. I feel that it’s unreasonable to assume, as I implicitly did, that a year 9 student will be able to select a question at the appropriate level without any help.

Conclusion:

I feel that this task was a step in the right direction and, coupled with the power of spacing the repetition of content for students (Carpenter, Cepeda, Rohrer, Kang, & Pashler, 2012), has great potential to be expanded and improved upon in future. One immediate improvement would be to have ‘challenge questions’ (for students who work through the 15 points) at different levels of difficulty, rather than just the one ‘challenge question’ that wasn’t so easily accessible to some students.

Note: Sunshine College has taken a similar approach where students select for themselves a ‘just right’ task and form small groups to solve it. This is an approach that I’ll be exploring more in the near future. You can read a brief paper about it here.

References:

Hattie, J. (2012). Visible learning for teachers. [Kindle version]. Retrieved from Amazon.com.au

Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24, 369-378.

Tying Together Backwards Design, Self Marking and Criterion Referencing for effective teaching

You may have heard of the concept of Backwards Design before. Often associated with Grant Wiggins, the concept basically states: ‘Work out what you want your students to know, then design your lessons with that end in mind.’ Pretty simple, and self-explanatory. But the question I had on my recent placement was: ‘How can I do this practically, and how can I effectively guide students through the learning process and offer useful feedback along the way?’

My ‘design brief’ (as specified by the Maths department) was to use the year 9 Australian Curriculum mathematics textbook and, over 4 weeks, cover all of the content in the chapter on Linear Equations and Algebra. But I was keen to make this more explicit for students.

Backwards Design

I had been inspired by Sarah Hagan’s work on ‘I Can’ sheets, so I wanted to tie them in with the idea of backwards design.

I surveyed the chapter, looking for a good way to organise this information, and found that each of the worked examples taught a unique competency (or an important but incremental advancement of a previously taught competency), and they were well labelled. Here’s an example:

[Image: a worked example (source: textbook)]

So I took these competencies and constructed an ‘I can’ sheet that I distributed to students on day 1 of my placement. The ‘I can’ sentence from the above example was ‘I can substitute values into expressions and evaluate’.

In the above ‘I can’ sheet, the ‘My pg.’ column was designed as a place for students to write the page number in their book that corresponds to that ‘I can’ statement. For example, if they stuck these two sheets into their book and began numbering straight after them, then the first lesson, covering example 1 (E1), would be on page 1, so they would write a ‘1’ in the ‘My pg.’ column. The ‘Key points’ space was for students to write their own key points at the end of every lesson. The Tinycc link took students to videos of the content that I made (I hope to upload these videos to YouTube in the near future).

So that was the backwards design, now it was time to tie it into assessment.

I put together a pre-test for the unit. This pre-test assessed students on E1 to E11, inclusive*, and each of the questions was numbered accordingly (as can be seen below). This test was administered by my mentor prior to my actual placement, and I was able to pick the tests up and mark them to get an idea of where each of the students was at even before I entered my teaching role. I didn’t return these tests to students, and told them explicitly: ‘The pre-test was about me working out where you guys are at the moment, and it helps me identify misconceptions so I can directly address them when I introduce the content. Your mark doesn’t matter; what matters is what the test told me about how each of you is currently thinking about this maths.’

Now, the goal of the backwards design was to show students where we were going in terms of content, and give them a clear roadmap of the stops along the way. The goal of assessment was to help me (pre-test) and them (mid-unit test) to see how they were tracking along this path.
My placement was 4 weeks long, and at the 2-week mark students were given a mid-unit test. Little did they know that this mid-unit test was EXACTLY THE SAME as the pre-unit test! (one of the benefits of not returning their pre-test ; ), so it gave me a perfect picture of how each of them had progressed.

But, as mentioned, I was very keen to help the students become evaluators of their own learning. To do this, I got them to self-mark. Here’s the process…

Criterion Referencing and Self Marking

I printed out the mid-unit tests (which I referred to as the ‘mid-unit checkup’) double sided on an A4 sheet (one sheet). This allowed me to give students the sheets, then collect them up and run them as a batch through the photocopier, giving me both their originals and photocopied versions. I then marked the photocopied versions so that I had the marks myself and, in the following lesson, handed back the unmarked originals to students, stapled together with two other sheets.

The first sheet stapled behind the original mid-unit test was a set of clearly worked solutions; the second sheet was the following…

For each of the students I filled out the ‘Based on pre-test’ column with either a tick, arrow, or cross (feel free to read the instructions on the above sheet so that this makes sense), and I then encouraged students to fill out the ‘Based on mid-unit check’ column in the same fashion.

[Image: the self-marking sheet, with ‘Based on pre-test’ and ‘Based on mid-unit check’ columns]
This did a couple of things. Firstly, it enabled students to link the competencies to questions, reinforcing what metalanguage such as ‘common factors’ and ‘collecting like terms’ meant. Secondly, it showed them the areas that they still needed to work on. Equally importantly, it showed students how they had progressed; they were able to say things like ‘Great, on the pre-test I couldn’t simplify by collecting like terms, but now I can!’, which was a real plus.
The thing that I really appreciated was the ability to generate a graph of class average results from the pre- and mid-unit tests. I then showed this to the class.
[Graph: class average results on the pre-test and mid-unit test, by competency]

This graph really excited students. They were able to celebrate the progress that they had collectively made, and to identify what they still needed to work on. Students even noticed that for E1, which was ‘write algebraic expressions for word problems’, the class had actually gone backwards! This prompted conversation, allowed us to talk about how this was the first thing we covered in the unit, and provided an opportunity to re-visit the concept of the forgetting curve that I’d introduced them to earlier.
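If you’d like to knock up a similar chart yourself, here’s a minimal Python sketch of the idea. The competency labels and percentages below are invented for illustration; in practice you’d pull the class averages out of your own mark book.

```python
# A minimal sketch: class-average results on the pre-test vs the mid-unit
# test, one pair of bars per competency. All data below is hypothetical.
import numpy as np
import matplotlib.pyplot as plt

# Percentage of the class answering each competency correctly.
pre_test = {"E1": 55, "E2": 30, "E3": 40, "E4": 25, "E5": 10}
mid_test = {"E1": 45, "E2": 70, "E3": 75, "E4": 60, "E5": 50}

competencies = list(pre_test)
x = np.arange(len(competencies))
width = 0.35  # width of each bar

plt.bar(x - width / 2, [pre_test[c] for c in competencies], width, label="Pre-test")
plt.bar(x + width / 2, [mid_test[c] for c in competencies], width, label="Mid-unit test")
plt.xticks(x, competencies)
plt.ylabel("Class average (% correct)")
plt.legend()
plt.show()
```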

Reflections

I was really happy with how this approach came off. I saw marked increases in engagement from students. Most importantly, they totally ‘got it’. Student feedback indicated that students gained a greater understanding of where they were and what they needed to work on. Here is what some of the students had to say about it (from a feedback form that I handed out on my final day of placement).

Next Time…

Next time I would like to improve upon this method by keeping the students’ ‘I Can…’ sheets all in one place, and preferably in digital form, so that they can’t get lost and both the students and I can access them from home. I’m thinking Google Spreadsheets for this, but I’ll continue to consider options. This would allow them to take greater charge and track their own progress through the formative, mid-unit, and summative assessments.

I’m happy with how this approach went and I look forward to refining this approach when I’m next in the classroom.

The image below shows the class distribution of scores on the pre- and post-tests. I calculated my ‘effect size’ based on this information and was very pleased with the result : )

[Graph: First Placement Effect Size (class score distributions, pre- and post-test)]
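For anyone curious about the arithmetic, here’s a rough sketch of how an effect size like this can be computed. The score lists are invented for illustration, and I’ve used Cohen’s d with a pooled standard deviation here; averaging the two standard deviations instead gives much the same picture.

```python
# A rough sketch of a pre/post effect size calculation (Cohen's d with a
# pooled standard deviation). The score lists are hypothetical.
from statistics import mean, stdev

pre_scores = [12, 15, 9, 20, 14, 11, 17]
post_scores = [18, 22, 15, 25, 20, 16, 24]

# Pool the two variances, then divide the mean gain by the pooled SD.
pooled_sd = ((stdev(pre_scores) ** 2 + stdev(post_scores) ** 2) / 2) ** 0.5
effect_size = (mean(post_scores) - mean(pre_scores)) / pooled_sd

print(f"Effect size: {effect_size:.2f}")  # Hattie's 'hinge point' is d = 0.4
```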

*I now think it would be worth considering giving students a pre-test that contained ALL of the content from the unit (i.e., Example 1 to Example 21). This is because: 1. it would have given them a better idea of the answer to the question ‘Where are we going?’; 2. it would have given some of the students opportunities to problem solve and try to work out for themselves how to do it; and 3. I recently attended a lecture by Patrick Griffin in which he talked about how students shouldn’t EVER be getting 100% on tests, because that doesn’t actually give you accurate information on where they are up to; it only tells you what level of achievement they can perform above! Some students did get very close to 100% on this pre-test. But it is important to acknowledge that I hadn’t yet had time to build a ‘have a go’ culture with these students, so some were reluctant even to attempt the pre-test, not quite understanding why they should be tested on something they hadn’t been taught yet. Making the pre-test longer and harder by including all of the unit’s content could have been overwhelming for some students.

One approach to introducing cold calling to the classroom

After recently reading Glen Pearsall’s Top 10 Strategic Questions for Teachers (a concise, free ebook), I was really inspired to introduce cold calling into the classroom. I had also wanted to incorporate spaced repetition into my classes, so I thought it would be a good opportunity to kill two birds with one stone. (Note: I’ve just started at my teacher training placement school. I’ve taught 3 lessons there in previous weeks, but today was the first day of a month taking two year 9 maths classes full time.)

The way I thought I’d implement cold calling, using pop sticks with the students’ names on them, was inspired by watching Dylan Wiliam’s ‘The Classroom Experiment‘, but I was afraid of making kids feel too put on the spot by the questioning. How to scaffold their participation?

The first student question I wanted to pre-empt was, ‘Why are we doing this spaced repetition stuff?’ (I didn’t actually use that term; I’ve called it ‘micro-revision’). I addressed that by showing them the following clip.

I asked students what they’d learned from the clip, and if anything surprised them. I then mentioned that, from now on, at the start of every class we’d be doing ‘micro-revision’: 3 questions to jog our memory on previous topics.
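(As an aside for anyone who likes to automate things: the selection of those 3 questions can be scripted so that one comes from yesterday’s work, one from about a week ago, and one from further back. The sketch below is just an illustration of that idea; the topic log and intervals are placeholders, not what I actually used.)

```python
# A rough sketch of picking three micro-revision topics with spacing in
# mind: one recent, one about a week old, one older still. The topic log
# below is hypothetical.
import random

topics_by_days_ago = {
    1: ["adding fractions"],
    7: ["order of operations", "substitution"],
    30: ["area of triangles", "percentages"],
}

micro_revision = [random.choice(topics_by_days_ago[d]) for d in (1, 7, 30)]
print(micro_revision)  # e.g. ['adding fractions', 'substitution', 'percentages']
```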

Then, to address fear of making mistakes and a feeling of being put on the spot, I showed this clip from Jo Boaler.

I then said, ‘Remember those coloured pop sticks you wrote your names on the other day?’ (Last Friday I had asked them to write their names on coloured pop sticks, which I used as exit cards.) ‘Well, for each question on the micro-revisions, I’m going to be using the sticks to select someone to share their thinking with the class.’ I then flashed up the following slide.

[Image: slide shown to students]
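For the technically inclined: the pop-stick cup is really just drawing names without replacement, with the cup refilled once everyone has had a turn. Here’s a toy sketch of the same idea in Python (the class name and student names are invented), handy if your sticks ever go missing.

```python
# A toy digital stand-in for the pop-stick cup: draw names without
# replacement, and refill the cup once everyone has been called on.
import random

class PopStickCup:
    def __init__(self, names):
        self.names = list(names)
        self.cup = []

    def draw(self):
        if not self.cup:  # cup is empty: refill and reshuffle
            self.cup = self.names[:]
            random.shuffle(self.cup)
        return self.cup.pop()

cup = PopStickCup(["Aisha", "Ben", "Carla", "Dev"])
for _ in range(6):  # six draws: everyone is called before anyone repeats
    print(cup.draw())
```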

Then we got stuck in, and I revealed the first 3 micro-revision questions (I also added a challenge question in case any of the students were steaming ahead; I stated that I wouldn’t be going through that one with the whole class). I watched the class and walked around to gauge when most of them had done about as much as they were going to.

I had a few questions planned: 1. ‘What is this question asking us to do? What does it relate to that we’ve learned before?’ (more on metacognitive questioning and practices here); 2. ‘Why did you do that?’, i.e., basically getting them to justify their mathematical thinking. I also encouraged them to use correct terminology. Then… the moment of truth!

The first student I asked said “I didn’t do it”. I replied “That’s OK. If you were going to do it now, how would you do it?”, and he went right ahead and solved it in front of the class (him talking, me writing on the board). The second student was really happy and excited to share as well, and the third student even used the terms ‘numerator’ and ‘denominator’ in their answer, which blew me away! I’ll also note that I was careful not to say ‘What’s the answer?’ but instead used the two questions outlined above, along with ‘Can you share with us how you approached this question?’. The emphasis was on sharing approaches, not on answers.

In closing, I was really happy with how the whole exercise came off, and I look forward to keeping up with the micro-revisions. I’ve also just had the thought of making Fridays ‘Feedback Fridays’, where I’ll give them five questions in the micro-revision and ask them all to hand in their answers so I can get a clearer picture of how each student is doing (that’s ‘Formative Feedback Fridays’ in teacher lingo ; ), and to write on the back of the sheet any feedback that they have on my teaching.

Metacognition: Can it help students problem solve?

I was recently doing a little reading into metacognition and began to wonder if it could be used as a tool in the maths classroom to help students, particularly with problem solving. I went hunting in the research and found the following paper.

Mevarech, Z. R., & Kramarski, B. (1997). IMPROVE: A multidimensional method for teaching mathematics in heterogeneous classrooms. American Educational Research Journal, 34(2), 365-394.

I got a lot out of it, and thought some others might like to hear how the IMPROVE method works.

This study was done in the late 1990s in Israel, to test a modified teaching model based on the incorporation of three elements that aren’t always seen in the classroom:

  • metacognitive training
  • cooperative learning
  • systematic provision of feedback-corrective-enrichment.

I’ll expand on each of these a little below, then talk about the results of using them in tandem (as was done by Mevarech and Kramarski).

Metacognitive Training


I like to think about metacognition as stepping back from a situation and asking ‘How is my brain reacting to the stimulus here? And how would I like it to react?’. The IMPROVE method got students to begin to do this by introducing three new question types to the mathematics classroom. These questions were made into cards and passed around when problems got challenging. The questions were:

Comprehension questions: What is the problem actually saying? Students were asked to ‘read the problem aloud, describe concepts in their own words, and discuss what the concepts meant or into which category the problem could be classified’ (p. 374).

Connection questions: “How does this question relate to things that you’ve seen before?”

Strategic questions: All about how you’re going to attack a problem. Ask “What strategy/tactic/principle can be used in order to solve this problem?”, “Why?”, and “How will you carry this out?” (p. 376).

The idea of these questions was to help students to differentiate between equivalent problems (questions with the same structure and the same ‘story context’), similar problems (different structure but the same ‘story context’), isomorphic problems (the same structure but a different ‘story context’), and unrelated problems (very little in common). These categories are from Weaver and Kintsch (1992); the terminology wasn’t taught to students.

Cooperative Learning

The method followed Brown and Palincsar’s (1989) approach, in which students were put into teams of four: one high, two middle, and one low achieving student (p. 377). As students progressed, teams were changed to maintain this structure. The paper states that a question-answering technique based on that of Marx and Walsh (1988) was used following a brief teacher exposition of approximately 5 minutes; I plan to explore this questioning technique further.

Feedback-Corrective-Enrichment

At the end of each unit (roughly 10 lessons), students took a formative test to check their comprehension of the unit’s main ideas. Tests were based on the work of Bloom (1976). Students who didn’t achieve ‘mastery’ (taken as 80% correct) were given extra support to solidify the basics, while students who did moved on to enrichment activities. Essentially, a form of differentiation.
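In code terms, the sorting step of that cycle is just a threshold check. Here’s a toy sketch (the names and scores are invented; the 80% cut-off is the paper’s, and of course the substance of the method is in the corrective and enrichment activities themselves):

```python
# A toy sketch of the feedback-corrective-enrichment split: students at or
# above 80% move on to enrichment, the rest get corrective support.
# Names and scores are hypothetical.
MASTERY = 0.80

unit_test_scores = {"Ana": 0.90, "Boris": 0.65, "Chen": 0.85, "Dana": 0.70}

enrichment = [s for s, mark in unit_test_scores.items() if mark >= MASTERY]
corrective = [s for s, mark in unit_test_scores.items() if mark < MASTERY]

print("Enrichment:", enrichment)  # ['Ana', 'Chen']
print("Corrective:", corrective)  # ['Boris', 'Dana']
```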

The Studies

The paper detailed 2 studies. The first included 247 year 7 students split into an experimental (n=99) and a control (n=148) group, and the second consisted of 265 students (experimental n=164, control n=101). The first study was conducted in a region of heterogeneous classrooms (i.e., students weren’t ‘tracked’ into classes based on ability; these classes spanned more than 5 ability years), whilst the second was undertaken in a district where tracking was the norm. The second study was run to see if the IMPROVE method, applied for an entire year, would yield results as encouraging as it did over the shorter period of Study 1, as well as to expand the range of topics to which the method was applied.

Study 1 applied IMPROVE to the topics of rational numbers, identification of rational numbers on the number axis, operations with rational numbers, order of operations, and the basic laws of mathematics operations.

Study 2 applied IMPROVE to the topics of numerals and rational numbers, variables and algebraic expressions, substitutions in algebraic expressions, linear equations with one variable, converting words into symbols, and using equations to solve problems of different kinds.

Tests were composed of computational questions (25 items) and reasoning questions with no computational requirements (11 items). The reasoning questions were marked on the following progressive scheme:

  • 1 point: justification by use of one specific example
  • 2 points: reference to a mathematical law, but imprecise
  • 3 points: references a mathematical law correctly, but the conflict is incompletely resolved
  • 4 points: question completely resolved, with correct reference to relevant mathematical laws.

Based on pre-intervention test scores, students were classified as low, middle or high achieving, with pre- and post-test results compared within these groups.

Results

[Table: Study 1 results, p. 381]

Study 1: No difference existed between the control and experimental groups prior to the intervention, but IMPROVE students significantly outperformed those in the control group post-intervention. Overall mean scores were 68.03 (control) vs. 74.72 (treatment) post-intervention (p<0.05), with mean scores on the reasoning component of 53.15 (control) vs. 62.56 (treatment). Improvements were seen at all achievement levels.

[Tables: Study 2 results, p. 384]

Study 2: As with Study 1, mean scores for the experimental group increased significantly more than those of the control group, with 2 important points to note. Firstly, only the gains for the high achievers group were statistically significant, with the medium achievers group marginally significant (p=0.052); low achieving treatment scores exceeded low achieving control scores in all cases, but this result wasn’t statistically significant. Secondly, these trends held for all topics except ‘operations with algebraic expressions’. It was suggested this was because this unit required more practice than other units; being a more procedural topic, the benefits of metacognitive approaches weren’t as impactful.

Discussion

It’s clear that the IMPROVE intervention aided student achievement. It increased students’ ability to draw on prior knowledge to solve problems and to establish mathematical mental schemata that increased their ease of access to this prior knowledge. One challenge with this study (as outlined by Mevarech and Kramarski themselves) was that the three elements (metacognitive training, cooperative learning, and feedback-corrective-enrichment) were all applied simultaneously, making it impossible to distinguish how much each contributed to the observed effects. Another open question is why the method appeared to yield gains proportional to students’ starting points, with higher achievers improving relatively more than middle and low achievers.

The authors suggest the program was successful in the following ways; it:

  • made it necessary for participants to use formal mathematical language accurately
  • made students aware that problems can be solved in multiple ways
  • encouraged students to see different aspects of problems
  • gave high achievers opportunities to articulate and further develop their thinking processes, at the same time letting lower achievers see these thinking processes modelled.

Post-intervention, the IMPROVE method was implemented in all classes of all the schools in which the trials were performed.

Notes: I initially found this article via: Schneider, W., & Artelt, C. (2010). Metacognition and mathematics education. ZDM, 42(2), 149-161. doi:10.1007/s11858-010-0240-2. Schneider and Artelt’s article also outlines various other metacognitive training strategies that have been trialled in different classrooms. I chose to focus on Mevarech and Kramarski’s IMPROVE model here as it was a rigorous study and was cross-referenced in several other papers.

IMPROVE is an acronym for: Introducing new concepts, Metacognitive questioning, Practicing, Reviewing and reducing difficulties, Obtaining mastery, Verification, and Enrichment.

References:

Brown, A. L., & Palincsar, A. S. (1989). Guided, cooperative learning and individual knowledge acquisition. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 393-451). Hillsdale, NJ: Erlbaum.

Marx, R. W., & Walsh, J. (1988). Learning from academic tasks. The Elementary School Journal, 88, 207-219.

Mevarech, Z. R., & Kramarski, B. (1997). IMPROVE: A multidimensional method for teaching mathematics in heterogeneous classrooms. American Educational Research Journal, 34(2), 365-394.

Weaver, C. A., III, & Kintsch, W. (1992). Enhancing students’ comprehension of the conceptual structure of algebra word problems. Journal of Educational Psychology, 84, 419-428.