It’s that time of year again – the time for PSAT scores to be released and for parents and students to be happily surprised by the results. We observed this phenomenon last year, after the “New” PSAT and SAT were launched in October 2015 (PSAT) and March 2016 (SAT). Roughly three quarters of the scores shared with us seemed to fall into what used to be considered “above average” territory, based on the percentiles listed. But that type of math won’t get you very far on the PSAT, and we wondered why there appeared to be significant “grade inflation” in the scores.
After wading through the research section of the College Board website (the College Board is the organization that creates and administers the “SAT Suite of Assessments,” which includes the PSAT), and gathering our own anecdotal evidence from student score reports, we have concluded that one of the primary issues with PSAT scores lies in the way the percentiles are expressed. A percentile has traditionally been used to let students know where they stand compared to other test-takers: it showed what percentage of students (0-100%) scored below each individual. The College Board has now changed this definition. It has created a new set of percentile scores, called Nationally Representative percentiles, which include not just all test-takers but all possible test-takers, meaning all students in the same grade whether or not they took the test. These percentiles are therefore highly inflated and misleading. When large numbers of non-test-takers are folded into the comparison group, students who actually have scores will naturally land at significantly higher percentiles. We see a wide range of scaled section scores – from the 600s into the 700s – that all have 98th and 99th percentiles associated with them. (Art Sawyer of the Compass Education Group has written extensively about problems with PSAT scoring in a three-part series.)
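The inflation effect described above is simple arithmetic. Here is a minimal sketch, using made-up scores (not actual College Board data), of how adding non-test-takers to the comparison group pushes the same score to a much higher percentile:

```python
# Illustrative sketch with invented numbers: how counting students who
# never took the test inflates a percentile rank.

def pct_below(scores, my_score):
    """Traditional percentile: percent of scores strictly below my_score."""
    return 100 * sum(s < my_score for s in scores) / len(scores)

# Ten hypothetical test-takers (made-up total scores).
test_takers = [400, 500, 550, 600, 650, 700, 750, 800, 900, 1000]
my_score = 650

# Percentile among actual test-takers only (the "User" style).
user_pct = pct_below(test_takers, my_score)

# "Nationally Representative" style: also count ten students in the same
# grade who never sat for the exam, modeled here as scoring below everyone.
non_takers = [0] * 10
national_pct = pct_below(test_takers + non_takers, my_score)

print(user_pct, national_pct)  # the same score ranks far higher nationally
```

With these invented numbers, a score of 650 sits at the 40th percentile among actual test-takers but jumps to the 70th percentile once the non-takers are added, even though nothing about the student’s performance changed.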
SAT score reports also contain Nationally Representative percentiles. However, alongside these are the User Percentiles, which include only scores from actual test-takers and are therefore more accurate. Even these percentiles are higher than they would have been on the “Old” SAT, because they now count students scoring “at or below” each scaled score. Previous SAT/PSAT percentiles counted only test-takers below one’s own scaled score, and counting scores both “at” and “below” one’s own raises the percentiles themselves.
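The “below” versus “at or below” distinction can also be sketched with a few made-up numbers. Whenever many students share the same score, counting ties makes a visible difference:

```python
# Illustrative sketch with invented numbers: "below" vs "at or below".

def pct_below(scores, my_score):
    """Old-style percentile: percent of scores strictly below my_score."""
    return 100 * sum(s < my_score for s in scores) / len(scores)

def pct_at_or_below(scores, my_score):
    """New-style percentile: percent of scores at or below my_score."""
    return 100 * sum(s <= my_score for s in scores) / len(scores)

# Five hypothetical section scores, three of them tied at 600.
scores = [500, 600, 600, 600, 700]

print(pct_below(scores, 600), pct_at_or_below(scores, 600))
```

With this tiny invented pool, a 600 sits at the 20th percentile under the old definition but the 80th under the new one, because the three tied scores now count in the student’s favor. On a real exam with hundreds of thousands of test-takers the gap is smaller, but it always moves percentiles upward.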
Similarly, the entire SAT/PSAT scoring system has changed: whereas the median total score on the 1600-point scale used to be a 1000, it is now approximately a 1060. Parents and educators used to the old system are seeing SAT and PSAT scores much higher than expected, not realizing that comparing their own scores from high school with their students’ current scores is not really comparing apples to apples.
What’s behind this scoring change? Certainly, we can’t ignore the fact that the ACT surpassed the SAT in popularity a few years ago, and the College Board has been working diligently ever since to reclaim the lead. Consequently, we saw the rollout of the new SAT in March 2016, which featured a number of significant changes, including the elimination of the dreaded “vocab” questions and the dropping of the quarter-point deduction for wrong answers. On top of this, there is now significantly more time per question on both the Reading and Writing sections of the SAT, compared to their ACT counterparts, which makes the SAT more attractive for students who struggle with these sections. These changes, in combination with the inflated scoring system and the especially misleading Nationally Representative percentiles—not to mention providing free exams to local high schools—may indeed enable the SAT to win back some of the test-takers it lost to the ACT over the past decade.
What does all of this mean for you and your student? At Sandweiss Test Prep, our typical process has always been to encourage juniors to take full-length diagnostics for both the SAT and the ACT, and then use the results to recommend one test over the other. As part of this process, we would often spare busy kids the need to take a four-hour SAT diagnostic if they wanted to use their PSAT scores as a proxy, even though the PSAT is a notably shorter exam. However, we no longer feel comfortable doing this, for the reasons outlined above. Instead, we strongly suggest taking both a full-length SAT and a full-length ACT diagnostic in order to determine which test is better suited to your son or daughter’s skill set. We want to ensure that every student is prepping for the right test for the right reasons.