In his article “Questioning the Quality of Virtual Schools: NEPC Report on K12 Uses Flawed Measures of School Performance,” Matthew Chingos correctly argues that the school-level measures of performance used in the report were too flawed to support conclusions about online school quality. Luckily for Dr. Chingos, I have started running the analyses he called for, and they make the claim clearer: the virtual schools we have right now have not been promoting effective learning for our students.
I used student-level TAKS data (roughly 2 million students) for the state of Texas and merged on an “online school” indicator to examine the performance of students in online schools. Not only did those students perform worse, they moved down the distribution relative to their peers. Concretely: suppose a student ranked at the 50th percentile of his peers in 2010. After a year of online schooling, that same student would typically fall to the 49th or 48th percentile. I controlled for the demographic variables available in the dataset, but most importantly I used each student’s test scores from one year to the next to estimate the “value added” of these online schools. When I ran the analysis by age group, older online students seemed to fare better, which is worth considering (i.e., maybe online schooling is more worthwhile for older students than for younger ones).

Note: I submitted this paper to AERA and am deciding on a journal to send it to when it is ready. I included two of the overall tables in this post, but may have to take them down if the paper is published. This covers only online schools in Texas, but they run a K12 program, so that should tell us something, don’t you think?
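To make the two measurements concrete, here is a minimal sketch of the percentile-rank comparison and the value-added regression described above. All column names and the data are synthetic assumptions for illustration, not the actual TAKS file or my model specification (which also includes demographic controls):

```python
# Sketch of the two analyses: (1) change in within-year percentile rank,
# (2) a value-added regression of current score on prior score plus an
# online-school indicator. Synthetic data; column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "student_id": np.arange(n),
    "online": rng.random(n) < 0.10,            # ~10% in an online school
    "score_2010": rng.normal(500, 100, n),     # prior-year TAKS-like score
})
# Simulate the pattern reported above: online students lose ground.
df["score_2011"] = df["score_2010"] + rng.normal(0, 30, n) - 25 * df["online"]

# (1) Percentile rank within each year's statewide distribution
df["pct_2010"] = df["score_2010"].rank(pct=True) * 100
df["pct_2011"] = df["score_2011"].rank(pct=True) * 100
df["pct_change"] = df["pct_2011"] - df["pct_2010"]
print(df.groupby("online")["pct_change"].mean().round(2))

# (2) Value-added OLS: current score ~ intercept + prior score + online
X = np.column_stack([np.ones(n), df["score_2010"], df["online"].astype(float)])
beta, *_ = np.linalg.lstsq(X, df["score_2011"].to_numpy(), rcond=None)
online_effect = beta[2]   # estimated score penalty for online enrollment
print(f"online coefficient: {online_effect:.1f}")
```

A negative mean percentile change for the online group, together with a negative coefficient on the online indicator, is the pattern the paper reports; the real analysis adds demographic controls to the regression.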