Constructivist Math Sucks

Category: Constructivism
Published on Thursday, 24 May 2012. Written by Anonymous Teacher

There are a lot of 'new' maths out there that promise to increase math understanding in students by using constructivism. The idea is that by having kids figure things out for themselves, and do a lot of 'critical' thinking, they will come to a better understanding of math. There's just one problem: it doesn't work well.

Constructivism sucks

Here's an awesome blog post about one of these new math curricula, Everyday Math.

I haven't come across Everyday Math before, and its official website isn't helpful in detailing what it is exactly, but you can be sure it's similar to the 'new math' that left students unable to multiply numbers in their heads, yet vaguely aware of the commutative property (though they can't remember what it is).

The glacially moving What Works Clearinghouse issued an intervention report on the execrable Everyday Math.

Everyday Math is one of those constructivist "problem solving" based curricula that progressive educators like so much. It's currently in use in about 20% of all elementary schools.

It also doesn't work, as WWC has finally determined.

A preliminary note. The iron-clad D-ed Reckoning rule of education research has been verified yet again: ninety percent of all Ed research sucks.

In Everyday Math's case there were 61 "research" studies. 57 did not meet WWC's evidentiary standards. That means that 93% of the Everyday Math research sucked.

None of the research fully met the evidentiary standards. Only four of the studies met the evidentiary standards with reservations. These were quasi-experimental studies. Here are the results of those four quasi-experimental studies:

The Carroll (1998) study included 76 fifth-grade students in four classrooms from four school districts using Everyday Mathematics and a comparison group of 91 fifth-grade students in four classrooms from similar districts, matched on student demographics and geographical location. The intervention group had used Everyday Mathematics since kindergarten. The comparison group had used traditional basal mathematics texts at all previous grades.

The Carroll (1998) study reported a statistically significant positive effect of Everyday Mathematics on geometric knowledge. After accounting for pretest differences between Everyday Mathematics students and comparison students, the WWC determined that this finding was substantively important but not statistically significant. Based on this study finding, the WWC categorized the effect of Everyday Mathematics on geometric knowledge as being a substantively important positive effect.

So the results from the first quasi-experimental study were not statistically significant. That's all you need to know. Let's move on.

The Riordan and Noyce (2001) study included 3,781 fourth-grade students in 67 schools in Massachusetts using Everyday Mathematics and a comparison group of 5,102 fourth-grade students in 78 similar schools, matched on baseline mathematics achievement scores and student demographics. Forty-eight schools in the intervention group had implemented Everyday Mathematics for four or more years (early implementers), and 19 schools had implemented Everyday Mathematics for two or three years (later implementers). The comparison group used 15 different textbook programs representing the instructional norm in Massachusetts, with the most commonly used programs being those published by Addison-Wesley, Houghton-Mifflin, and Scott-Foresman.

The Riordan and Noyce (2001) study reported a statistically significant positive effect of Everyday Mathematics on overall math achievement. Using school-level data provided by the authors, the WWC determined that this finding was statistically significant and substantively important for the 48 early-implementing schools. For the 19 later-implementing schools, however, the WWC determined the finding to be substantively important but not statistically significant. Based on this study finding, the WWC categorized Everyday Mathematics as having a statistically significant positive effect on overall math achievement for the 48 early-implementing schools and a substantively important positive effect for the 19 later-implementing schools.

This study was funded by the Noyce Foundation, which is headed by one of the study's authors and which also has a financial stake in Everyday Math. So we have a quasi-experimental study conducted by a potentially biased researcher. David Klein also noted the following defects in the study:

One of several shortcomings of [the Riordan/Noyce study] is that the schools studied are not identified. That makes it impossible to verify the results independently, thereby raising the possibility of fraud. This is a realistic possibility as the Noyce Foundation (headed by one of the authors of the study) has invested a lot of money in CMP, one of the programs found successful by the study. Clearly, that author has an interest in good results for the schools using the program she endorses. The editors of the journal should have asked for an independent confirmation of the methodology used to select both sets of schools and how they were matched before publishing the article. Instead, we get another example of "advocacy research." The comparison schools are constructed in a questionable way. The authors mix up all kinds of textbooks in the comparison groups--for half of which the authors report no curriculum program at all in the published article. No follow-up studies have ever appeared showing whether these schools maintained improvement and continued to improve in subsequent years of MCAS (2000, 2001, 2002), which would be easy to do since there were only about 20 or so schools in the experimental group.

The most telling evidence against the Riordan/Noyce study, however, is the fact that despite growing use of NCTM-endorsed math programs (financed by millions of dollars from the NSF), the percentages of kids in the top two categories on the grade 4 and grade 8 MCAS have been stable since 1998. For 5 years, there has been no discernible increase in the percent of kids moving into the two top categories, based on a test that matches the NCTM reform agenda.

Let's move on.

The Waite (2000) study included 732 third-, fourth-, and fifth-grade students in six schools using Everyday Mathematics and a comparison group of 2,704 third-, fourth-, and fifth-grade students in 12 similar schools, matched on baseline math achievement scores, student demographics, and geographical location. The schools in the intervention group were in their first year of implementing Everyday Mathematics. The comparison group used a more traditional mathematics curriculum approved by the school district.

The Waite (2000) study reported a statistically significant positive effect of Everyday Mathematics on overall math achievement. After accounting for the misalignment between the school as the unit of assignment and the student as the unit of analysis, the WWC determined that this finding was substantively important but not statistically significant. Based on this study finding, the WWC categorized the effect of Everyday Mathematics on overall math achievement as being a substantively important positive effect. The Waite study reported subtest results (concepts, operations, and problem solving). After WWC calculations, these results were found to be positive but not statistically significant. The subtest analyses do not factor into the rating.

Another statistically insignificant result. Let's move on.

The Woodward and Baxter (1997) study included 104 third-grade students in five classrooms in two schools using Everyday Mathematics and a comparison group of 101 third-grade students in four classrooms in one similar school, matched on student demographics and geographical location. The comparison group used the Heath Mathematics curriculum, a more traditional mathematics program.

The Woodward and Baxter (1997) study reported no significant effect of Everyday Mathematics on overall math achievement. After accounting for pretest differences between Everyday Mathematics students and comparison students, the WWC confirmed this finding. Based on this study finding, the WWC categorized the effect of Everyday Mathematics on overall math achievement as indeterminate. The study also reported subtest results (computation, concepts, and problem solving) and found a statistically significant positive effect on the concepts subtest. WWC calculations revealed a substantively important, but not statistically significant, positive effect for the concepts subtest and a substantively important, but not statistically significant, negative effect for the computations subtest. The subtest analyses do not factor into the rating.

So this small study had indeterminate results with statistically insignificant subtest results.

The WWC's generous conclusion:

"The WWC found Everyday Mathematics to have potentially positive effects on mathematics achievement"
Source: http://d-edreckoning.blogspot.com/2006/09/its-official-everyday-math-sucks.html

 

Everyday Math claims that it's research-based and field-tested. This implies that research shows it to be good. As you can see above, that claim is far from true.

Yes, the research shows that kids learn some math with it, but no better than with any other crappy constructivist math program.

Let me clear up a few terms for those of you who haven't had statistics in a while.

Statistical significance means that the difference between Everyday Math and whatever program it was compared against was probably not due to random chance. A statistically significant result would imply that Everyday Math is actually better, and that the higher scores aren't just the random variation in test scores you'd expect anyway.

With the lone exception of the conflicted Riordan/Noyce result, the studies above found no statistical significance. That means that, as far as we know, the differences could easily be random chance alone. In other words, Everyday Math showed no demonstrable positive effect.
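If it's been a while since your last stats class, here's a rough sketch in Python of the random variation a significance test is supposed to guard against. (The setup and the scores are entirely made up by me, nothing from the actual studies; it just uses numpy.)

```python
# A made-up simulation (no real data from any study): two "programs" that are
# exactly equally effective, because every score is drawn from the same
# distribution, still produce different group averages almost every time,
# purely from the luck of which kids landed in which group.
import numpy as np

rng = np.random.default_rng(0)
gaps = []
for _ in range(10_000):
    program_a = rng.normal(loc=70, scale=10, size=30)   # 30 scores per group
    program_b = rng.normal(loc=70, scale=10, size=30)
    gaps.append(abs(program_a.mean() - program_b.mean()))
gaps = np.array(gaps)

print(f"average apparent gap: {gaps.mean():.2f} points")
print(f"trials with a gap of 3+ points: {(gaps >= 3).mean():.0%}")

# A significance test asks whether an observed gap is bigger than the gaps
# pure chance produces in a setup like this. If it isn't (p > 0.05), the
# "winning" program hasn't shown anything that luck alone wouldn't show.
```

On a typical run, the two identical 'programs' differ by roughly a couple of points on average, and by three points or more in roughly a quarter of the trials. That's the noise a real difference has to rise above before anyone gets to brag about it.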

When they say the effect was substantively important, what they're really doing is trying to get away with some statistical bullshit.

Once we discover that there is no statistical significance, we have no evidence of any effect at all! The discussion should end there.

But no, they have to drag in substantive significance (importance) to try to rescue Everyday Math.

Substantive significance means that the difference found in a study is large enough not to be trivial. Say a study compared people who played the lottery once a year with people who played it twice a year. Playing twice a year does give you a higher chance of winning, but the increase is so small that it's worthless. It's substantively insignificant.
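To make the distinction concrete, here's a toy Python example with invented numbers (my own, not from any study). It uses Cohen's d as the effect-size measure and scipy's standard two-sample t-test; the WWC's "substantively important" label keys off an effect size of roughly 0.25 standard deviations.

```python
# A toy example of "substantively important but not statistically significant."
# All numbers are invented: 8 kids per group, a visible gap in average scores,
# but a sample far too small and noisy to rule out plain luck.
import numpy as np
from scipy import stats

group_a = np.array([62, 71, 78, 85, 66, 90, 74, 80])   # mean ~75.8
group_b = np.array([58, 69, 83, 61, 77, 72, 65, 88])   # mean ~71.6

# Effect size (Cohen's d): the gap between the means, measured in
# standard-deviation units. Roughly 0.25 or more counts as
# "substantively important" in WWC terms.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Standard two-sample t-test: could a gap this size be plain chance?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"Cohen's d (effect size): {cohens_d:.2f}")   # about 0.41
print(f"p-value:                 {p_value:.2f}")    # about 0.42, not significant
```

Run it and you get an effect size of about 0.4 (past the "substantively important" cutoff) alongside a p-value of about 0.4 (nowhere near significant). A big-looking gap, and no evidence it's real.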

What the studies above are telling us is that there's no real evidence Everyday Math has any effect, because there's no statistical significance. All they can say is that if the measured difference were real, it would be big enough to matter.

Can you smell the bullshit?

A nice, substantial difference between two groups means nothing if that difference isn't real. It's like a picture of a sandwich: it looks like it would be a good meal (substantive significance), but it's not real; it's a freakin' illusion (statistical insignificance)!

Without statistical significance, substantive significance means nothing!

If the dragon doesn't exist, it doesn't matter how big its wings are!

So how good is Everyday Math? No better than any of the other programs it was tested against, but let's just ignore the facts and follow our dreams... typical constructivism.

Comments  

 
Guy, 2013-11-02 04:25
Nice. ;-)
 


Copyright 2012, Virtue Academy