In Cross-checks on ethics, I wrote about how well-meaning journals and conferences can miss ethics violations that include rigging experiments, making up data, padding the co-author list, and other cheats of that sort. Legitimate publishers of scientific studies and data can only go so far in validating what they’re asked to publish, and sometimes bogus papers get through.
There’s another side to the bogus-publication coin, though: the enablers. These are the journals and conferences that specialize in providing a forum for questionable — or downright garbage — studies and research reports.
At the questionable end are studies whose funding causes conflict-of-interest concerns, ones lacking rigorous methodology, and ones with insufficient data to deduce anything from the results. These problems usually are caught during peer review, if the papers are otherwise honestly written, and the top journals and conferences reject them. It’s easy to see how students, research faculty, and professors, living in a publish-or-perish environment, look to less reputable outlets for their work.
We’ve recently heard that Elsevier colluded with Merck — the pharmaceutical company that made Vioxx, and that makes Fosamax, Vytorin, and Zocor — to produce a fake journal, one that looks like a peer-reviewed publication, but isn’t:
An “average reader” (presumably a doctor) could easily mistake the publication for a “genuine” peer reviewed medical journal, he said in his testimony. “Only close inspection of the journals, along with knowledge of medical journals and publishing conventions, enabled me to determine that the Journal was not, in fact, a peer reviewed medical journal, but instead a marketing publication for [Merck’s Australian subsidiary].”

In fact, soon after that it came out that Elsevier had a whole series of such “journals”:
Scientific publishing giant Elsevier put out a total of six publications between 2000 and 2005 that were sponsored by unnamed pharmaceutical companies and looked like peer reviewed medical journals, but did not disclose sponsorship, the company has admitted.
This was particularly disturbing because of Elsevier’s reputation and the breadth of their publishing operation. But they aren’t the only outlet for dicey data. For years now, there have been publications that will accept your work for a fee. That makes these pay-to-publish “journals” places where you can take that paper that’s been rejected everywhere else, and make it count on your résumé.
As I've participated in peer reviews, I’ve seen papers with no substance, and papers that are so far off topic as to be ridiculous (a mechanical engineering paper submitted to a computer science conference, for instance). Some people will submit anything anywhere, in the hope of getting something published.
But publications with no standards are... well, check this out:
So [Philip] Davis teamed up with Kent Anderson, a member of the publishing team at The New England Journal of Medicine, to put Bentham’s editorial standards to the test. The pair turned to SCIgen, a program that generates nonsensical computer science papers, and submitted the resulting paper to The Open Information Science Journal, published by Bentham.

And, yet, the paper was accepted, and The Open Information Science Journal would publish it for an $800 fee, “to be sent to a PO Box in the United Arab Emirates.” The director of publications claims that they knew it was bogus and were just trying to smoke the author out by pretending to accept the paper. That excuse seems unlikely, though I would believe that they’d have taken his $800, had he sent it, and then thrown the paper out.

The paper, entitled “Deconstructing Access Points,” made no sense whatsoever, as this sample reveals:
In this section, we discuss existing research into red-black trees, vacuum tubes, and courseware [10]. On a similar note, recent work by Takahashi suggests a methodology for providing robust modalities, but does not offer an implementation [9].
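For the curious, SCIgen’s trick is simple: it recursively expands a hand-written context-free grammar whose terminals are plausible-sounding CS jargon, so the output parses as English but means nothing. Here’s a minimal sketch of that technique in Python; the toy grammar rules and helper names below are my own illustration, not SCIgen’s actual grammar, which is far larger and tuned to read like a real paper.

```python
import random
import re

# A toy context-free grammar in the spirit of SCIgen's: each nonterminal
# maps to a list of productions; generation picks one at random and
# recursively expands it. These rules are illustrative only -- SCIgen's
# real grammar is far larger and hand-tuned to mimic CS papers.
GRAMMAR = {
    "S": [
        "In this section , we discuss existing research into NP , NP , and NP .",
        "Recent work by NAME suggests a methodology for providing ADJ NP , "
        "but does not offer an implementation .",
    ],
    "NP": ["red-black trees", "vacuum tubes", "courseware", "ADJ modalities"],
    "ADJ": ["robust", "scalable", "probabilistic", "decentralized"],
    "NAME": ["Takahashi", "Davis"],
}

def expand(symbol: str) -> str:
    """Expand a nonterminal; any token without a rule is a terminal."""
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(token) for token in production.split())

def sentence() -> str:
    # Punctuation is spaced out in the grammar to keep tokenizing trivial;
    # reattach it here.
    return re.sub(r" ([,.])", r"\1", expand("S"))

if __name__ == "__main__":
    for _ in range(3):
        print(sentence())
```

A grammar like this never “knows” anything about its subject, which is exactly why a reviewer who actually reads a submission catches a SCIgen paper immediately, and why a venue that accepts one plainly didn’t.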
The article goes on to mention the infamous World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI).[1] That conference, devised by Nagib Callaos (who claims to be a retired professor) and now in its 13th successful year, has no focus and no standards — and, hence, no standing — and exists purely to make money by attracting participants. Speakers must pay the meeting fee to attend, and those fees are what feed the conference.
Most conferences expect speakers to pay the meeting fee, but two things are different here: the number of people they try to suck in, and the rule that each speaker may present only one paper; presenting a second paper (for example, that of a colleague who didn’t have the money to travel to the conference) requires an extra fee (see here). I know of no reputable conference that does that.
Other clues about WMSCI are these:
- The peer review process includes a provision for a “non-blind” review in which the author selects the reviewers (see item number 2 here).
- The enormous “program committee” — 284 members (see here). Normally, that’s the list of peer reviewers, but in this case it’s artificially inflated by the inclusion of, essentially, everyone who’s ever agreed to participate in the conference (and probably some who haven’t). It’s nice of Callaos, though, that he claims to have removed “those who manifested no interest.”
- The absence from the program committee of institutions that are respected in the field. There are no PC members from Columbia, Cornell, Princeton, MIT, Carnegie Mellon, Stanford, or Georgia Tech, for example. But there is one from Quinnipiac University, a small school in Connecticut that has no doctoral program in Computer Science.[2]
- The unbounded scope (see here) leaves no topic behind. The conference covers everything from user interfaces to information retrieval to object-oriented programming to ethics and computer crime to security and privacy and hacking to artificial intelligence to computer graphics to wireless networks to gaming to....
- The lack of sponsorships from reputable companies and organizations.
A few years ago, in an incident similar to the one engineered by Philip Davis, above, the SCIgen folks at MIT got a nonsense paper accepted to WMSCI 2005, and planned to attend and present the paper with a nonsense presentation (see the “Examples” and “Talks” sections on the SCIgen page). They outed themselves, though, and Callaos rescinded the acceptance. And David Mazières, then at NYU, submitted this paper to WMSCI 2005. According to his web page, “We never received official notification of whether the paper was accepted or rejected.” The figures are especially inspired.
Of course, silliness aside, these for-profit-only journals and conferences are a real problem, in that they serve as traps for the unwary. Someone reading a paper and not knowing that the journal isn’t reputable might base a major grant proposal on junk, wasting a lot of time and money and causing much embarrassment. And if you wound up having your paper accepted to a phony conference, would you, once you realized it, be willing to admit that you were hoodwinked? What would your boss, who paid for the trip, think?
Companies and schools involved in research know what the first-tier and second-tier conferences are, their positions decided by other researchers and based on the quality of the work presented and the selectivity of the program committees. In my field, the first tier includes MobiSys, Ubicomp, and CHI, among a host of others. Second-tier conferences and journals are fine, too — they just don’t draw the best research work as consistently as the first tier does. The faculty at any research university will know what’s reputable and what’s not.
[1] Please don’t confuse WMSCI with WMCSA, now called HotMobile, the International Workshop on Mobile Computing Systems and Applications, which is a reputable workshop.
[2] I don’t mean to disparage Quinnipiac University, only to say that its inclusion on a program committee for a real cybernetics and informatics conference wouldn’t be appropriate, especially considering the schools that are not there.
1 comment:
A cousin to the conferences you describe are the pseudo-institutes (e.g., Discovery Institute, I think it's called). They publish unreviewed papers on topics like "micro-evolution" because it fits their expectations. Then others go on to cite the papers, duping thousands.