The Shortcomings of Current Educational Research

Apr 5, 2023 9:00:00 AM

by Rebecca E. Wolfe

There is an undeniable need to document impact, determine effectiveness, and grow the research base in support of learner-centered education so states and districts can confidently pursue evidence-based change. 

But it’s irresponsible to ask schools, educators, and students to rethink teaching and learning while continuing to measure progress with the same outdated and biased research methods. And it’s not just about what’s measured, but how.

Conventional wisdom suggests “good research” is validated through principles like rigor and standardization. But for large-scale efforts such as statewide personalized, competency-based approaches, those assumptions break down.

Ultimately, education is about living, breathing humans whose needs change over time. Effective collection and analysis of data depend on the ability to make sense of information as it unfolds in real time, over extended periods, and across multiple layers of the education system.

In the name of rigor, research has privileged so-called “hard data” over one of the most fundamentally human experiences – teaching and learning.

Abandoning One-Size-Fits-All

At KnowledgeWorks, instead of positioning ourselves as outsiders and neutral experts (both false constructs), we seek to build trust in our findings through a steadfast commitment to democratized research and to data equity principles such as access, contextualization, defensibility, and transparency.

For example, on more than one occasion a district or state partner has asked our research team to remove language that could be considered “triggering” in their community from a question on a survey instrument. Decisions on whether to approve or deny a request weigh the extent to which the change would affect the data the original question sought, the interests of those making the request, the voice of the populations most affected, and an established set of research guidelines. In each instance, our decisions and reasoning are openly shared.

A researcher primarily committed to rigor might refuse such changes outright, since they could compromise the comparability of the data or challenge the researcher’s sense of subject-matter expertise. This approach puts methods before people, reinforcing the ways research is done “to” teachers and students rather than with them.

Standardization also runs contrary to the essential design philosophy of personalized, competency-based learning. 

Standardizing the order, delivery, and time spent implementing new practices may work for a behavioral experiment in a lab, or when implementing a single, specific change in a school (e.g., new math software). However, standardization is neither an appropriate nor a desirable trait of research into complex reforms such as personalized, competency-based learning.

Research designs should be nimble and follow the implementation plan of a particular learning community. Measurement benchmarks and determinations of impact should align to conditions and frameworks, not rigid progressions. 

For instance, when designing an impact analysis for a statewide effort, a personalized research design accounts for desired changes at the student, school, district, community, cohort, and state levels. It considers the individual nature of each learning community and its level of readiness.

While one participating district may go “all in” on three of the 12 district conditions over two years, another district in that state might be prepared to tackle eight in the same period. After two years, the outcomes in these two districts will look quite different.

A standardized research approach would measure each district at the same time for the same intervention and compare the results to make an overall determination of impact. Any conclusion drawn from this scenario would rest on an incomplete picture and yield meaningless findings.

What Constitutes Good Measurement?

In contrast to that standardized approach, personalized impact assessments still measure outcomes, but they make room to modify the measures themselves based on feedback mechanisms and lessons learned over time.

Mixed-methods, multi-year studies are helping KnowledgeWorks better understand how schools, districts, and states are moving the needle on key performance markers such as competency, student agency, and applied knowledge.

We support participating districts in continuous improvement and chart their impact over time. Our research also brings greater clarity to how we might improve the measurement approaches themselves.

Instead of defining “good research” through rigor and rigidity, let’s create an approach to measurement that is defensible and democratized. Only by modernizing measurement can we determine whether large-scale innovations such as personalized, competency-based learning produce the desired results.

Please also read: Top Ed Researchers offer new tool to fine tune educational recovery efforts.

Photo by UX Indonesia on Unsplash.

Rebecca E. Wolfe

Rebecca E. Wolfe is vice president of impact and improvement at KnowledgeWorks.
