Data Sharing, Data Dumping & Claims of ‘Academic Fraud’ in Tweetstorm Over Story About Louisiana Vouchers
On Aug. 2, The 74 published a lengthy investigation by Matt Barnum into a slowly simmering controversy over the differing levels of access to data on Louisiana’s voucher program granted to two groups of academics. The story’s publication sparked a tsunami of tweets and dueling recriminations among the researchers.
In short order, the question raised by the story — whether officials grant access to data at least partly according to policy priorities — gave way to another: Given the lightning-rod nature of the voucher debate, will researchers compromise their ethics to be first to a particular finding?
The arguments are multi-layered, but the gist is this: In 2016, researchers from MIT, Duke, and the University of California, Berkeley, published a study showing that the first year of Louisiana’s voucher program, the most tightly regulated in the nation, led to marked decreases in student achievement.
The stakes were high and the findings immediately politicized: “Louisiana’s voucher program is one of the most tightly monitored in the nation: Participating schools can’t set admission criteria, can’t charge families tuition beyond the voucher amount, and — most important to the study — require students to take the same tests as their public school counterparts,” wrote Barnum, who now reports for Chalkbeat.org. “That the research showed those first-year test comparisons were negative was quickly taken up both by voucher critics as evidence that private school choice doesn’t work and by hardline school choice supporters as proof that Louisiana’s program is overregulated.”
After the study was released, Louisiana ended its data-sharing relationship with the scholars in question, according to public records obtained by Barnum. The convoluted chain of emails between the researchers and state officials revealed mounting tensions: Louisiana education officials pushed the researchers to wait for later data that officials expected to be more complete and more positive, while the researchers countered that those later data were in fact incomplete.
Not clear that it was based on not liking results. Tho there was push back on decision to pub with one year's data https://t.co/yQHkJpOBS2
— Matt Barnum (@matt_barnum) August 4, 2017
Can be legit reasons for not wanting preliminary results of an ongoing program released. I can attest. Regret this: https://t.co/ftpIfSOUrR
— Steven Glazerman (@EduGlaze) August 4, 2017
Fast-forward a year: in June, a second set of researchers, from the Education Research Alliance at Tulane University and the School Choice Demonstration Project at the University of Arkansas, published a study of the Louisiana program that used three years of more complete data and also found a negative effect, albeit a less pronounced one.
Important consequence: Only one research team looked at year 3 of LA program & results turned on key method. choice https://t.co/C7iVCQhFff
— Matt Barnum (@matt_barnum) August 4, 2017
The 74’s publication of the story outlining the tensions immediately provoked social media chatter that unearthed a second, angrier controversy: Jay P. Greene, head of the Department of Education Reform at the University of Arkansas and a member of the second research team, accused the MIT-Duke-Berkeley team of “academic fraud” for what he said was its failure to cite prior work by a Ph.D. candidate on Greene’s team.
“Why did it matter that they be first?” Greene asked on his blog in an Aug. 4 post. “By being first to release they could act like they had the original analysis rather than a replication. Top Econ journals tend not to be as interested in replications of a grad student’s dissertation. And by being first to release and not citing [the doctoral candidate’s] work they could act like theirs was the original analysis.”
The MIT-Duke-Berkeley team fired back with an extensively footnoted memo and a copy of its data-sharing agreement with the state of Louisiana. Greene responded in kind, as the attendant comment threads filled up with accusations and academic citations.
Barnum, Greene also opined, missed “an incredibly depressing story about how status and power in our field contributes to academic abuse and dishonesty.”
Or not, in the eyes of others:
Collegial to ask but not "academic fraud" if you don't. As noted by Walters, the UArk group didn't cite the thesis. https://t.co/2mwUJNhHkD
— Prof Dynarski (@dynarski) August 7, 2017
Either way, the scholarly cyber-conflict entirely skirted what may matter more to the public and to advocates hungry for hard information about policies proven to help disadvantaged students. According to Barnum’s interviews, obtaining education data is, almost anywhere in the country, a delicate dance.
Most states operate on something of an ad hoc basis, the original story reported: Researchers may be able to get data, but only if they can convince some combination of education officials and politicians that their study is worthwhile. Sometimes that turns on an individual researcher’s relationship with key policymakers, noted Dan Goldhaber, a professor at the University of Washington Bothell and director of the Center for Analysis of Longitudinal Data in Education Research.
“I would say it’s a discretionary process,” Goldhaber was quoted as saying, “which … opens the door for politics.”