Most reading comprehension assessments are grounded in the belief that reading is a specialized set of skills, such as finding the main idea or identifying the author’s purpose. This may be one of the ways in which the assessment cart drives the instruction horse: on state assessments, students read a random passage they’ve never encountered before (known as a cold read) and answer questions related to main idea and purpose. Sounds simple, right? Read a passage at your grade level and answer a few questions aligned to your grade level’s standards.

In school systems, we can see that these tests have driven reading instruction for quite some time. Teachers look at samples from state assessments and do everything in their power to make sure what they teach in their classroom looks as much like the state assessment as possible. And this isn’t all bad—instruction should be aligned to the expected outcome. Unfortunately, the overt focus on skills has led to a misplaced emphasis on skills-based reading lessons in which the topics of the texts being read have taken a backseat to the skills being practiced. The problem is, the topics of the reading passages heavily impact a student’s comprehension… and many educators don’t know that this is the case.

“The Baseball Study” by Recht and Leslie (1988) should be required reading on assessment of reading comprehension. The outcomes of this study tell us that a student’s prior knowledge of a topic has a greater impact on his or her ability to comprehend a text than generalized reading ability. In other words, being a “good” reader isn’t going to help you very much if you aren’t familiar with the topic you are reading about.

We know this disproportionately impacts children from lower socio-economic backgrounds. Children from homes with higher incomes tend to be exposed to a broader array of topics and knowledge through reading, discussions with parents, and family trips to places with historical significance, varying geography, or diverse cultures. These experiences begin to build a child’s web of information—a tool they’ll use to make meaning of new information as they read in class. On the other hand, a student who isn’t afforded these opportunities has the greatest need for the knowledge-building opportunities provided by their teacher and school.

In response to this research, a growing number of schools and districts have moved to adopt knowledge-building curricula. My district (Jackson-Madison County Schools, TN) made this switch two years ago; we now use two elementary ELA curricula – CKLA and EL Education – designed around many science and social studies topics, so that students learn about key topics as part of reading and writing instruction. The goal is to ensure all students build a bank of knowledge on a variety of topics in science, history, literature, and the arts, so that the knowledge gaps created by socio-economic factors are decreased over time and replaced with a more consistent knowledge base.

Currently we’re in a distance-learning era, and I must say, knowledge-building curricula “travel well”; it’s a lot easier to help students study these knowledge-rich topics with parents than it would be to coach parents on “find the main idea.” Providers of high-quality curricula, such as CKLA (Amplify), EL Education, and Wit and Wisdom (Great Minds), have stepped up to the plate to provide teachers and parents with the option to continue knowledge-building at home. And although internet access remains a barrier for many students, a district that has aligned its efforts behind a knowledge-building curriculum stands a better chance of translating high-quality materials into paper-based assignments for use at home as well.

But how does this translate to reading tests? Does implementing a knowledge-building curriculum mean reading scores will go up this year? Or the next?

Although it is possible for implementation of a knowledge-building curriculum to lead to early gains in reading, I’m concerned that district efforts to improve reading in this way will be declared a failure if scores don’t rise quickly enough. (My district has seen positive indicators in the first two years, including much higher scores on phonemic awareness and nearly a 5% gain in 3rd grade reading proficiency… but I expect that significant gains might take time, as I’ve written previously.) As educators, we are quick to discard initiatives that don’t produce results immediately, even when we know deep down it isn’t that simple.

Also, measuring progress when implementing knowledge-based curricula is complicated by the aforementioned structure of traditional reading assessments. Reading passages on assessments are generally selected based on qualitative and quantitative complexity, but with little thought to the topics of the passages. Since most districts don’t share a common curriculum, why worry about the topics of the passages?

For assessments to truly measure the progress of students in the first few years of a knowledge-based curriculum, the texts included on tests need to be on topics similar to those read during the school year. For example, our 5th grade students read about biodiversity in the rainforest, Jackie Robinson and the Civil Rights Movement, and the impact of natural disasters. No matter how much a student learns on these topics, if he or she has general knowledge gaps and the test includes an anchor text on something foreign to the student, the student most likely won’t perform very well. In this event, do the test results tell us anything about how much progress the student made this year, or how good a reader he or she is? Or did they just confirm that significant knowledge gaps still exist?

In an effort to better gauge student progress in a knowledge-based curriculum, Louisiana has moved down a path to assess students in reading on the topics covered in their grade’s reading curriculum. The state can do this due to the tremendous efforts put into implementing the Louisiana Guidebooks, a statewide English/language arts curriculum rolled out in 2014. Although the initial results of this pilot are still pending, the research suggests this is a promising practice. Limiting passages on reading assessments to topics covered within the reading curriculum controls, to some degree, for the variability in prior knowledge among students. This, in turn, should remove the pre-existing bias toward middle- and upper-income students inherent in most current reading assessments. Only by controlling for the impact of prior knowledge will we ever truly be able to measure the progress of students in the first few years of experiencing a knowledge-based curriculum. This will help us as a profession to resist the urge to switch tracks in a year or two when we don’t see the dramatic gains we so desire. Then, in theory, after five years or so of students matriculating through a range of topics in successive grades, the variability of scores on cold-read assessments should decrease.

As a profession, our collective handwringing over reading scores has gone on for a long time. And truthfully, we know too much about how students learn to read for it to go on much longer. The question is, will we know when we are on the right track if we aren’t looking for progress in the right places?

– Jared Myracle


Curriculum Notes

If you’d like to speak with me about the specific curricula used in my district, they are:

K–2 ELA: Core Knowledge Language Arts (all-green on EdReports, Tier 1 on Louisiana Believes)

3–5 ELA: EL Education Language Arts (all-green on EdReports, Tier 1 on Louisiana Believes)

6–12 ELA: LearnZillion Louisiana Guidebooks (Tier 1 on Louisiana Believes in grades 6–8)

K–12 Math: Eureka Math (all-green on EdReports in grades K–5, Tier 1 on Louisiana Believes in K–12)