Just got sent an interesting article from the Journal of Organic Chemistry written by its editor. He’s got some interesting things to say about their publications (or rather, their rejected publications):
“In 2008, 15 manuscripts were deactivated because the authors were unable to provide original copies of reports for high-resolution mass spectra or combustion analyses. By June of 2009, 13 of these manuscripts had been published in other journals. In six cases, the original data were replaced by a new set that was consistent with the structures. In the other seven publications, the inconsistent data were left unchanged, were removed, or were replaced with another set of inconsistent data or with data obtained by another analytical technique. Four of the manuscripts were submitted to other journals within only a few days after being deactivated by JOC.
While the number of manuscripts that JOC deactivated in 2008 because of unsatisfactory data and were subsequently published elsewhere was small, it is deeply disturbing that about a third of those authors chose to ignore the problems pointed out by JOC and submitted their manuscripts to other journals without adequately resolving the issues surrounding the data they originally reported. All of these manuscripts were submitted from academic institutions. The responsibility for this behavior clearly rests on the senior authors, who are setting a horrible example for their young colleagues.”
C. Dale Poulter
I remember we once had a seminar about academic research fraud, which included a bit about data forgery that actually got published. The speaker pulled up a page of an article that had two side-by-side graphs. “See anything weird here?” We didn’t. Especially because it was a luminescence paper, and that prof loved his luminescence. Then he zoomed into the two graphs, pretty small ones if you’re just browsing the full page. “How about now?” They were identical. And we’re not just talking trendlines and major peaks. The tiny little ups-and-downs of the background signal were exactly the same. The guy ctrl-v’d some poor little experimental graph and used it to show the similarity of supposedly different experimental runs. Busted! But not before he got that nonsense published.
I’m still kind of amazed that crap like this can still be pulled in the digital age. I mean, that’s the whole argument for the current pay system of non-free journals, right? They can afford to pay for better editors, are more reliable, and are thus more prestigious. I’m sure it’s happening less and less, but that seems like the kind of thing that should be caught on the first go-around. Gets me wondering about free open-access science sometimes.