Educating Potential

Evidence-based decision-making; or decision-based evidence-making?

I was interviewing a school network leader when we began talking about a funder he had met recently. He was asking the funder to invest in the school model’s expansion and had been turned down, not because the school was not getting good results, but because the funder said he did not believe there was enough of a research base to support the model itself.  During that conversation he used the phrase “decision-based evidence-making.” I loved it.

I was trained in social science research.  I am familiar with the range of research that exists in the educational space.  I know that education research is often disregarded because it is seen as less rigorous than what people perceive to be the gold standard of randomized, controlled studies in science. (Though a few recent articles about drug trials call into question just how good that research is as well.)  There is a lot of research across a number of different disciplines which, if used to shape educational policies, would undoubtedly lead to dramatic improvements in a range of student outcomes across all student populations, but among underperforming students in particular.  Policies based on this research would include:

  • Reducing by 50% the amount of sodium and sugar consumed by students.  The relationship between high-quality food choices and behavior, academic performance, and health is well-documented.
  • Ensuring that all students have at least 15 minutes of unstructured recess/playtime or classroom-based movement for every hour of class time.
  • Ensuring that all students get a minimum of eight hours of sleep (hard to do through policy, but certainly a possible grant-funded education campaign).
  • Changing school start and end times to match students’ natural biological patterns (in other words, having schools – especially middle and high schools – start later in the day, when adolescents are awake and ready to learn).

I am fairly certain that there is more evidence to support these changes to policy than there was (and is) to support at least two of the current improvement policies we have put into place: educator evaluations and performance-based compensation systems.  Most of the current research on the last three years of efforts is mixed on both.  There was certainly very little evidence to support these reform strategies at the time they were promoted and adopted en masse through programs like Race to the Top.

It turns out, however, that the topic of food in schools is a politically charged one, which raises the ire of (among others) numerous factions within the food lobby, conservatives who don’t want money spent on social welfare programs, libertarians who want government out of kids’ lives altogether, and liberals who disagree on what the specifics of such policies should be.  Recess and a range of other activities in many schools, especially those serving low-income kids of color, have been deemed to take away from instructional time in “core” areas.  Changing school start and end times would wreak havoc on the lives of workers, families, and an entire industry of out-of-school providers ranging from after-school programs to mentoring programs.  Apparently, it’s better to downplay evidence that supports decisions that might disrupt certain aspects of the status quo or anger one set of interest groups, and to focus instead on business-sounding solutions such as theories of action that improve human-capital pipelines, or to vilify teachers’ unions.

This points to a larger issue related to social science research, and educational research in particular.  As much as the current crop of reformers likes to throw around the term “evidence-based decision making” and touts their use of “data,” more often than not the policies they espouse are based on “theories of action” or broad interpretations of select research studies rather than the incontrovertible evidence they demand from those who oppose their ideas.  Certainly, there are research studies that can be interpreted as supporting the policies they want to adopt, but there is usually also research indicating the policies are likely to make no difference or have a negative effect.

Moreover, decision-makers are as likely as not to ignore evidence or data that does not conform to their ideas of what “good” data should look like.  An example: college-prep charter schools are all the rage right now, since their data seem to indicate that their students outperform their peers in traditional public schools.  Yet there is conflicting data about the extent to which these schools – sometimes intentionally and sometimes by virtue of their lottery-based admissions policies – cull out students with special needs, behavioral problems, or a lack of social and family supports.  We also know that parents with real choices about where to send their children – whether through where they buy houses, which schools they opt into through public choice systems, or where they send their children within the private sector – are not choosing these school models for their own kids.

Admittedly, the information about where affluent parents send their kids is based less on statistical algorithms and more on statistics about housing prices in specific neighborhoods and the demographics of students in different types of schools, but it is certainly data.  And it could be considered when deciding whether or not particular school models provide the full range of educational and social opportunities that parents might want for their kids.  It is never really discussed, however, because it does not support the policy stance toward these schools that most urban district leaders and policy reformers have already decided to take.

So let’s admit that decisions around something as complex as improving education are almost always based on selectively identified evidence that supports the policies a particular group wants to promote.  At least then we could begin to have frank conversations about the values that underpin these decisions rather than spending our time throwing conflicting yet “real” data at those who take a different position.
