I recently heard my friend Shawn McCusker say an iPad is like a hammer: a good tool, if your problem is a nail. Use the tool correctly, and we can build some good things. Misuse or overuse it, and problems arise. We can say the same about Lexiles, a popular measurement tool used to rate the text complexity of books and other reading material. Although the specific Lexile formula is just a little over my head, a Lexile ranking does a reasonably good job of doing what it is designed to do: measure text complexity. But a Lexile ranking by itself does a poor job of establishing the suitability of a particular book for an individual reader or a specific class of students.
I hope we can all agree that part of helping students grow as readers is developing their ability to process texts with ever-increasing levels of sophistication. A tool like a Lexile ranking can be helpful in reaching that goal, but it cannot do all the work. Even the Lexile.com website speaks of the need to keep the Lexile number in context: “A Lexile text measure is based on the semantic and syntactic elements of a text. Many other factors affect the relationship between a reader and a book, including its content, the age and interests of the reader, and the design of the actual book. The Lexile text measure is a good starting point in the book-selection process, with these other factors then being considered.”
That last sentence reminds us that while considering the potential value of Lexiles we should also remember that Lexile is a product of MetaMetrics, a corporation that develops measurement tools for use in education. In other words, Lexiles are developed and sold by a profit-driven company. Consider yourself warned. There is no educational reason why a Lexile should be “a good starting point” while students’ more individualized needs are relegated to later consideration.
The appropriate use of Lexile rankings becomes murky because they are included in the language of the Common Core State Standards (CCSS). Whenever CCSS enters a conversation, the issues become distorted due to the controversies surrounding these standards. This is especially true when it comes to Lexiles. Before CCSS, MetaMetrics offered a set of Lexile ranges based on grade levels. Then CCSS expectations led MetaMetrics to raise the appropriate ranges for the same grade levels. The new numbers seem sort of arbitrary, which makes me wonder if MetaMetrics pimped out its Lexile numbers to satisfy the needs of the new standards system. Were the old ranges “wrong”? Are the new ranges “correct” only because CCSS says they are?
Shake it off. Don’t lock on to specific numbers. CCSS language also says that when choosing a title for an individual student or class we should consider qualitative aspects of the book (levels of meaning, complexity of the narrative arrangements, prior knowledge requirements, etc.) as well as how a specific book fits a specific reader in terms of its emotional challenges, edginess, and interest level. There are plenty of reasons for skepticism about CCSS, but they get this right. Quantitative measurements should only be one part of how we choose books for students.
But let’s keep the focus on developing readers rather than on CCSS. How can Lexiles help us achieve our goal of exposing students to texts of appropriate complexity? For comparison purposes, consider the Lexile numbers for several titles commonly found in schools:
Because of Winn-Dixie: 610L
The Hunger Games: 810L
The Sound and the Fury: 870L
To Kill a Mockingbird: 870L
Diary of a Wimpy Kid: 950L
Harry Potter and the Deathly Hallows: 980L
The Great Gatsby: 1070L
The Diary of Anne Frank: 1080L
The Sun Also Rises: 1420L
Those numbers seem relatively accurate if we consider that The Sound and the Fury has some narrative elements that are deceptively simple, while Diary of a Wimpy Kid includes a narrator who from time to time throws out big words and convoluted sentences. No one in his right mind would say that Diary of a Wimpy Kid is overall a more sophisticated text than The Sound and the Fury, but looking only at the syntactic features of those books might lead us to that conclusion. Similarly, looking only at the quantitative aspects of The Diary of Anne Frank would erroneously lead us to believe that it has more literary sophistication than The Sound and the Fury or The Great Gatsby. If we look at Lexiles in isolation, we are using them wrong, and we will likely make bad choices for our students.
But just as we shouldn’t look at Lexiles in isolation, we also shouldn’t look at the qualitative and reader-dependent aspects in isolation.
Let’s consider To Kill a Mockingbird in more depth. This beloved novel’s Lexile is 870, which CCSS says is just about right for grades 4-5 (or early sixth grade in the old Lexile grade-level system). If we consider qualitative factors, it’s hard to make the case that To Kill a Mockingbird is a good choice for grades 4-5. That novel’s emotional complexity and cultural context arguably place it more in the grade 7-8 zone. Some readers may not be quite ready for it even then, which means To Kill a Mockingbird is probably best suited for late in grade 8 or early grade 9. But let’s go back to that Lexile for a moment. Is To Kill a Mockingbird our best choice for a whole-class novel in grades 8-9 if its Lexile indicates grades 4-5? Is there another title we could choose for ninth grade that is in sync both qualitatively and quantitatively? All of this can vary from school to school and class to class, of course, but the further a book is from the center of those ranges, the more we have to question the choice.
Here is another potential Lexile-related problem. MetaMetrics not only assigns Lexile numbers to books; the company also tests students and assigns Lexile numbers to them. This, theoretically, makes it easy to match up readers with the right books. Sorry, but I can’t get behind that. Books are static; readers are not, especially young readers. Too much potential for misuse exists when we simplify students into metrics. A Lexile or any other system should never be used to deny a student access to any book or text. Teachers can warn students about potential difficulties, but motivation can make a difference to a struggling reader who wants to stretch beyond her numerical classification.
So when it comes to Lexiles, maybe we should use common sense and professional judgment. Yes, a book’s Lexile ranking has some value, but that value is most clear when we see it alongside other aspects of the text and compare it with other readability metrics (ATOS and Flesch-Kincaid, for example).
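For readers curious about what these quantitative formulas actually look at, here is a rough sketch in Python of the Flesch-Kincaid grade-level formula, one of the public metrics mentioned above. (The Lexile formula itself is proprietary, so this is an analogy, not a reconstruction of it, and the syllable counter is a crude vowel-group heuristic rather than a dictionary lookup.) Note that the formula sees only sentence length and word length in syllables; it knows nothing about meaning, narrative complexity, or emotional content, which is exactly why it can rank Diary of a Wimpy Kid above The Sound and the Fury.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count runs of vowels.
    Real readability tools use dictionaries or better heuristics."""
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level, a public readability formula:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59.
    Like a Lexile, it measures only surface features of the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short sentences and short words score at a low grade level...
simple = "The dog ran. The dog sat. The dog ate."
# ...while long sentences full of polysyllabic words score much higher,
# regardless of how meaningful the passage actually is.
dense = ("Notwithstanding considerable controversy, quantitative "
         "readability measurements remain ubiquitous in contemporary "
         "educational assessment.")
print(flesch_kincaid_grade(simple))  # low grade level
print(flesch_kincaid_grade(dense))   # far higher grade level
```

Run both sentences through the formula and the contrived "dense" sentence scores far above the simple one, which illustrates the essay's point: a syntactic score is a measurement of form, not of literary sophistication.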
A Lexile is a tool, a pretty good tool. Tools work best when used for the right purpose, in conjunction with other tools, with a smart carpenter making good decisions.
I’d like to hear your stories about how Lexiles are used in your school and your teaching. As always, thank you for reading.