Honestly? This is baloney!
The blog where we expose the real truth about proficiency, achievement levels and accountabaloney.
Currently, Florida’s Board of Education is contemplating raising achievement levels (“cut scores” or “passing levels”) for the new Florida State Assessment (FSA) to match the much higher “proficient” achievement level on the National Assessment of Educational Progress (NAEP), thus eliminating the so-called “honesty gap”. Floridians have been told it is important to be honest with our students now, rather than have them discover their academic shortcomings later. Such an important measurement, one that will affect how many students pass 3rd grade or earn a high school diploma, deserves a serious look.
In general, if you are going to create something called an “honesty gap” and then use it to accuse states of lying to students about their math and reading abilities, you should be extra careful to confirm you are, in fact, telling the truth. It would be pretty bold to create a deceptive measure and then call it the “honesty gap”. Just sayin’.
The Invention of the Honesty Gap
The first reference to this “honesty gap” appears to be in a May 2015 report from Achieve (read it here, if you must). Achieve is an organization dedicated to corporate education reform. It helped develop the Common Core State Standards (CCSS) and write the Next Generation Science Standards (NGSS), and it served as project manager for states in the Partnership for Assessment of Readiness for College and Careers (PARCC). Achieve suggested that state test results and the NAEP often tell “conflicting stories about students’ proficiencies in math and reading.” It called the discrepancy between state proficiency results and NAEP proficiency rates a “proficiency gap”, claiming too many states were “saying students are ‘proficient’ when they are actually not well prepared.”
This summer, the Foundation for Excellence in Education, the education reform group founded by Jeb Bush, began promoting this discrepancy as the “honesty gap”, encouraging citizens to learn about the “level of dishonesty in their state” through websites such as whyproficiencymatters.com and honestygap.org. In Florida, they even launched a social media campaign encouraging Floridians to email Governor Rick Scott, asking him to protect children from being “victims of Florida’s proficiency gap” and “ensure we raise the bar so that our children can succeed in college and careers.”
Question: Were Achieve and FEE being honest about this honesty gap? How safe is our children’s education?
A Look at NAEP and Its Achievement Levels
NAEP, also known as The Nation’s Report Card, was developed in 1969 to provide information on how well US students were performing over time. It is frequently referred to as the “gold standard” of student assessments. By 1990, criterion-based achievement levels (basic, proficient, and advanced) had been added to NAEP, and comparisons across states became possible. Initially, NAEP participation was voluntary. Since the passage of No Child Left Behind (NCLB), participation in the 4th and 8th grade math and reading NAEP assessments has been required as a condition of receiving Title I funds. NAEP obtains its data by sampling: a sample of 4th and 8th grade students from each state participates in the biennial assessment.
No Child Left Behind required the annual testing of all students in grades 3 through 8, with the goal of 100% grade level proficiency for all students, in reading and math, by 2014. That was, of course, an impossible goal.
It doesn’t take much research, however, to realize that the disparity between state proficiency levels and NAEP proficiency levels (the “honesty gap”) can, in large part, be explained by differences in the way each test defines proficiency.
The Center For Public Education explains the discrepancy this way (emphasis added):
Just because NAEP and states label a level of achievement “proficient” does not mean they are defining proficiency the same way. Nor are they necessarily testing the same knowledge and skills.
For example, the National Assessment Governing Board (NAGB), the board that oversees NAEP policies, states that “In particular, it is important to understand clearly that the Proficient achievement level does not refer to ‘at grade’ performance” (Loomis and Bourque 2001).
Unlike NAEP, NCLB requires states to define proficiency in terms of being on grade level. A report developed by the U.S. Department of Education specifically states that “The proficient achievement level [for NCLB] represents attainment of grade-level expectations for that academic content area” (U.S. Department of Education 2007).
A report commissioned by the NAEP Validity Studies Panel found that the most appropriate NAEP level for gauging state assessment results is the percentage of students at or above the basic achievement level (Mosquin and Chromy 2004). Even U.S. Secretary of Education Margaret Spellings urged reporters to compare state proficiency rates to NAEP’s basic level (Dillon 2005). Moreover, James Pellegrino, lead author of the National Academy of Science’s 1999 evaluation of NAEP, Grading the Nation’s Report Card, suggested that proficiency for accountability purposes perhaps lies somewhere between NAEP’s basic and proficient levels, because NAEP’s proficiency level was not developed with accountability in mind (Pellegrino 2007).
To repeat: with NAEP, “proficiency” DOES NOT refer to “at grade” performance, and, by law, state tests MUST define proficiency in terms of being on grade level. In other words, NAEP standards are aspirational, whereas state standards are set for minimal competency. I wonder why, when the distinction between the two definitions of proficiency is so clear, organizations like Achieve and FEE would confuse them.
Is it possible that the only honesty problem we have is coming from those who insist on equating the NAEP and FSA definitions of proficiency? The two are simply not the same.
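To see how much the choice of benchmark drives the headline number, consider a deliberately simplified sketch. Every percentage below is invented for illustration; none are actual Florida or NAEP results:

```python
# Hypothetical illustration: the size of the "honesty gap" depends
# entirely on which NAEP achievement level the state result is compared
# against. All numbers below are invented, NOT actual Florida or NAEP data.

state_pct_proficient = 70   # % deemed "proficient" (on grade level) by a state test
naep_pct_proficient = 35    # % at or above NAEP Proficient (an aspirational bar)
naep_pct_basic = 72         # % at or above NAEP Basic (closer to grade level work)

gap_vs_proficient = state_pct_proficient - naep_pct_proficient  # 35 points
gap_vs_basic = state_pct_proficient - naep_pct_basic            # -2 points

print(f"'Honesty gap' vs NAEP Proficient: {gap_vs_proficient} points")
print(f"Same comparison vs NAEP Basic:    {gap_vs_basic} points")
```

Same state results, very different story, which is exactly the point of the Mosquin and Chromy finding quoted above: measured against the comparable benchmark, the scandalous “gap” can all but disappear.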
Additionally, NAEP proficiency levels were NOT developed with accountability in mind, but FSA test scores are clearly used primarily for accountability purposes. If Florida were to follow FEE’s advice and raise FSA achievement levels to match NAEP levels, all the ramifications of Florida’s test-and-punish accountability system would come into play (see our “Ingredients” post here). Cut scores for the FSA are not set in a vacuum: if fewer children pass the Florida assessment because of higher cut scores, then more children will suffer the mandated repercussions.
Keep in mind that NAEP only tests students in grades 4 and 8, so proficiency levels for the other grades, particularly grades 3 and 10, where Florida’s high stakes are greatest, would have to be extrapolated from a few NAEP data points (see the sketch below).
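To make that extrapolation problem concrete, here is a minimal sketch. The two cut points used approximate NAEP’s published mathematics “Proficient” cuts at the tested grades; the linear projection to grades 3 and 10 is purely an illustrative assumption, not anyone’s actual methodology:

```python
# Hypothetical sketch of the extrapolation problem: NAEP reports
# "Proficient" cut points only at the grades it tests, so cut scores for
# grades 3 and 10 would have to be projected from very few data points.
# The values below approximate NAEP's mathematics Proficient cuts; the
# straight-line projection is an assumption made purely for illustration.

known = {4: 249, 8: 299}  # grade -> approximate NAEP math Proficient cut point

def projected_cut(grade: int) -> float:
    """Linearly project a cut score from the two tested grades."""
    slope = (known[8] - known[4]) / (8 - 4)  # scale points per grade level
    return known[4] + slope * (grade - 4)

for grade in (3, 10):
    print(f"Grade {grade}: projected 'Proficient' cut = {projected_cut(grade):.1f}")
```

However the projection is actually done, the point stands: the grades carrying Florida’s highest stakes would rest on cut scores NAEP never directly measured.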
Matching FSA passing levels to NAEP proficiency levels, in the atmosphere of Florida’s test-and-punish accountability system, would be inappropriate and misguided. Predicting the extent of the resulting harm requires an understanding of the relationship between NAEP scores and other student achievement measures.
Student Achievement and NAEP
The Center for Public Education also reports interesting data regarding student achievement, international achievement, and NAEP scores (details and references here):
- No country, even those performing highest on international assessments, would have 100 percent of its students reach NAEP’s proficiency level and no country would have 100 percent of its students reach NAEP’s basic level (Phillips 2007).
- Looking at high school math students, 44% of “A” students in math would score below the NAEP proficient level, and 80% of “B” students would score below proficient (Scott, Ingels and Owings 2007).
- 32 percent of seniors who had completed calculus would not reach the proficient level on NAEP (Scott, Ingels and Owings 2007).
- According to ACT, a score of 22 on the math portion of its college admissions test indicates that a student is ready for college algebra (Allen and Sconing 2005), yet almost 80% of students who scored between 21 and 25 on the ACT mathematics assessment scored below NAEP’s proficiency level for math (Scott, Ingels and Owings 2007).
- As expected, 91% of high school seniors who scored at NAEP’s advanced level in math and 79% of those who scored proficient went on to earn a four-year bachelor’s degree (Scott, Ingels, and Owings 2007).
- Students who performed below proficient also earned four-year bachelor’s degrees at significant rates, especially compared to the nation as a whole, where approximately one third of adults hold a four-year degree. A full 50% of students scoring at the basic level went on to receive bachelor’s degrees, and almost 20% of those who performed below basic on NAEP received bachelor’s degrees (Scott, Ingels, and Owings 2007).
What Florida should expect if our “proficiency gap” is closed:
If nearly a third of calculus students are unable to reach NAEP’s proficiency level in math, it becomes clear that NAEP proficiency cannot possibly define “grade level proficiency.” When half of students scoring at the basic level go on to complete college, it would be dishonest to suggest to those students that they were “not college material.”
If students who reach grade level expectations are still not ready for college, perhaps it is the system that focuses almost entirely on grade level performance on standardized tests that is to blame.
At the October 28th Florida Board of Education meeting, it was suggested that parents needed to have achievement levels better explained to them. It appears the BOE would benefit from similar instruction. Even the Commissioner of Education (who needed to check an online dictionary for definitions) might need a review.
If Florida’s Board of Education decides to raise the proficiency bar beyond grade level expectations, to match NAEP’s “proficiency” level and eliminate this so-called “honesty gap”, then board members should expect (based on the data above):
- Nine-year-old third graders, reading at grade level, will be marked for mandatory retention, forced into summer school and additional assessments, and labeled “bad readers”.
- Just under half of “A” students in math will fail their FSA Math assessment, and up to 80% of “B” students will fail. These students will be made to believe they are “not good” at math and may be required to take remedial classes.
- Students who might have passed calculus will take remedial math classes instead.
- Large numbers of students who are fully capable of earning a four-year bachelor’s degree will fail the 10th grade FSA ELA, be labeled “not college ready”, and be assigned to demoralizing remedial reading classes and exam retakes.
- Test prep will increase for all students as teachers struggle to get all children above grade level, a mathematical impossibility.
- Teachers and schools could be negatively impacted even if all students are performing at or above grade level (yet below “proficiency”).
- Children who are performing at a developmentally appropriate level will be told they are poor achievers.
When fewer children pass these tests, the ramifications will no longer be “unintended consequences” but the direct result of these actions by this Florida Board of Education, which will have failed to understand that grade level proficiency and NAEP proficiency are NOT synonymous.
At the end of the day, the most important question board members should be asking is: “Does the Level 3 achievement score, for every subject and every grade, represent developmentally appropriate grade level expectations, as required by NCLB?” Additionally, since it represents the difference between promotion and retention: “Does the Level 2 achievement score for 3rd grade reading represent a developmentally appropriate, minimally acceptable grade level expectation?” Of course, these are questions that can only be answered by educators, not advocates or ideologues. Please, ask the educators.
P.S. There are rarely times when I cannot see both sides of an issue. This is one of those times. This “honesty gap” is clearly based on the misrepresentation that the determination of “proficiency” on NAEP and the states’ requirement to monitor grade level proficiency have anything in common other than the word “proficiency”. Could it be that the reformers really don’t understand, or do they expect their followers to follow blindly, without question? What would happen if states heeded the call of FEE, Achieve, and Mr. Padget and raised state achievement levels to match NAEP’s high bar? Millions of children would fail state tests, and the reformers would have “proved” our public schools are failing. These organizations are either seriously ignorant or completely deceitful. I am hoping common sense and true honesty (not this deceitful reformer kind) will prevail, and that cut scores will be set based on educationally sound principles rather than political baloney.
There are some good references cited here; it is important to read them and get the details. Here is another reference that I appreciate, from Grover “Russ” Whitehurst at Brookings: “Since participation in the assessment in math was first required of all states in 2003 under the federal Elementary and Secondary Education Act, the average biennial uptick on eighth grade math had been 1.5 points. Assuming no significant demographic changes in the nation between 2013 and 2015 and business as usual in the nation’s schools, the expected score for 2015 would have been 287 (rounded). Thus the 2015 results are roughly five NAEP scale points lower than would have reasonably been expected. A five point decline in NAEP would mean that eighth graders in 2015 were roughly six months of school behind eighth graders in 2013. The same calculations using the more conservative actual decline of three points leads to an estimated loss of roughly four months of school.” His arithmetic is easy to check; see the sketch below.

There are a lot of details in Whitehurst’s report, and the details are very important. I basically trust him, yet the report leaves a lot of questions. For example: the selection of the population, that is, who is actually taking the test. Are the students selected to take the NAEP in your state actually “typical” or “average”, or do they vary greatly from the median or mean? It’s the old apples-and-oranges question. Who selects the sample of students who will actually take the NAEP? Are representative districts chosen, and how are they chosen? On international assessments the problem is even worse: a country like Singapore can select a very different sample of students, and exclude many others, so that the country will “look good”. This is an age-old trick, and it is only ONE of the many details that must be examined before conclusions are drawn.
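Whitehurst’s back-of-the-envelope arithmetic can be reproduced with nothing more than the figures in his quote. A minimal sketch; note that the 2013 baseline of 285 is inferred from his numbers (285 + 1.5 = 286.5, which rounds to his expected 287) rather than stated directly:

```python
# Reproducing Whitehurst's back-of-the-envelope arithmetic using only
# figures from the quote above. The 2013 baseline of 285 is inferred
# from his numbers, not stated directly in the quote.

avg_biennial_gain = 1.5   # average 8th grade math gain per 2-year cycle since 2003
score_2013 = 285          # inferred 2013 8th grade math average
actual_decline = 3        # the "more conservative actual decline" in the quote

expected_2015 = score_2013 + avg_biennial_gain  # 286.5, which he rounds to 287
actual_2015 = score_2013 - actual_decline       # 282
shortfall = 287 - actual_2015                   # 5 scale points, matching the quote

# Whitehurst equates the 5-point shortfall with ~6 months of school,
# which implies roughly 1.2 months of learning per NAEP scale point.
months_per_point = 6 / 5
print(f"Expected 2015 score: 287 (from {expected_2015})")
print(f"Actual 2015 score:   {actual_2015}")
print(f"Shortfall vs trend: {shortfall} points, about "
      f"{shortfall * months_per_point:.0f} months of school")
print(f"Using only the actual 3-point decline: about "
      f"{actual_decline * months_per_point:.1f} months")
```

The “months of school” conversion is simply his own ratio of five scale points to six months, applied back to the raw declines.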
Another example: the U.S. is compared with Finland, and then policy makers say, “Be more like Finland!” But that is impossible. The U.S. is a complex society. If you hold Finland up as our example, you need to know that Finland is only about the size of our Wisconsin, and that the comparison takes no account of the diversity and heterogeneity of our students. That heterogeneity is something I am proud of: we have diverse student populations, not the homogeneity Finland represents. These are only a few of the issues. Look at the details in the reports. Don’t take anything at face value because a politician told you so.