For the last few days I’ve been stewing about one of the articles in the recent Wikipedia edition of the epistemology journal Episteme. (See the Wikipedia Signpost for summaries of the articles.) I don’t find any of them particularly enlightening, but one just rubs me the wrong way: K. Brad Wray’s “The Epistemic Cultures of Science and Wikipedia: A Comparison”, which argues that where science has norms that allow reliable knowledge to be produced, Wikipedia has very different norms that mean it cannot produce reliable knowledge.
I guess it’s really just another proof of the zeroth law of Wikipedia: “The problem with Wikipedia is that it only works in practice. In theory, it can never work.”
Part of my problem might be that last year I blogged a comparison between Wikipedia’s epistemological methods and those of the scientific community, but came to the opposite conclusion, that in broad strokes they are actually very similar. But more than that, I think Wray’s analysis badly misrepresents both the way science works and the way Wikipedia works.
A central piece of Wray’s argument is that scientists depend on their reputations as producers of reliable knowledge for their livelihoods and careers, and so their self-interest aligns with the broader institutional interests of science. This is in contrast to Wikipedia, where mistakes have little or no consequence for their authors and where a “puckish culture”, prone to jokes and vandalism, prevails. Wray writes that “In science there is no room for jokes” such as the Seigenthaler incident.
The idea that scientists are above putting jokes and pranks into their published work is belied by historical and social studies of science, and by many scientific memoirs as well. James D. Watson’s Genes, Girls, and Gamow is the first thing that comes to mind, but there are many examples I could use to make that point. And science worked much the same way, epistemologically, long before it was a paid profession and scientists’ livelihoods depended on their scientific reputations. (I don’t want to over-generalize here, but some of the main features of the social epistemology of science go back to the 17th century, at least. See Steve Shapin’s work, which is pretty much all focused, at least tangentially, on exploring the roots and development of the social epistemology of science.)
Likewise, the idea that Wikipedia’s norms and community practices can’t be effective without more serious consequences for mistakes seems to me a wrong-headed way of looking at things. On Wikipedia, as in science, there are people who violate community norms, and certainly the personal consequences for such violations are lesser on Wikipedia than for working scientists. But for the most part, propagating and enforcing community norms is a social process that works even in the absence of dire consequences. And of course, just as in science, those who consistently violate Wikipedia’s norms are excluded from the community, and their shoddy work expunged.
For a more perceptive academic exploration of why Wikipedia does work, see Ryan McGrady’s “Gaming against the greater good” in the new edition of First Monday.
6,776 thoughts on “Wikipedia in theory”
Joseph Reagle picks up the discussion on his blog, with “Wray and the Wrong Tree”:
The major difference between “science” and “Wikipedia” is that there is only one Wikipedia. In science, if you hold a minority position on a major issue, there is at least some chance that you’ll be able to get funding to pursue it, and even get published occasionally, and in the worst case you can always start your own journal.
However, in Wikipedia if you hold a position unpopular with one of the major power-brokering elites that runs rampant within the community, you are quite likely to be excluded from expressing your opinion, forcibly if necessary, and without any real regard to the merits of your argument or even the “niceness” with which you present it.
Of course, there’s a history of such exclusionism in science as well; continental drift was excluded from establishment science for many years, simply because one of its opponents held a great deal of political power within the establishment and suppressed publication of papers on the minority position.
At least in establishment science, the people who are standing in opposition to you have names and reputations, and you can lobby them and their sponsors, and eventually they retire, die off, or lose their funding. In Wikipedia, these people are anonymous, unapproachable, impenetrable, and immovable, except through backstabbing political games played entirely within Wikipedia’s hothouse political forums. There is no way to reliably discover whether they are resisting dissent out of an honest desire for truth, or instead because of undue outside influences, such as, say, sponsorship from a corporation with a stake in the issue. Ever notice how scientists are increasingly expected to reveal the sources of their research funding, because of concerns that such funding may be slanting their conclusions? How does one do this for a Wikipedia editor?
Kelly, you’re on to something interesting. The pluralistic nature of scientific publication, of course, goes hand-in-hand with limited authorship.
In science one can, often does, and arguably should, exclude other major viewpoints one disagrees with. Since even mentioning (citing) work to criticize it can serve to bump its citation count without seriously diminishing its influence, it can be very difficult to recover the conversations that are going on under the surface of the scientific literature.
But because there are many venues for publication and the community is highly fragmented, censorship is not a major concern. The good science, it is assumed, gets sorted out in the long run, even if the vast majority of what gets published is utter crap. (When I worked in a biochem lab, the professor insisted that most publications were indeed wrong and/or useless.)
Wikipedia, in contrast, is a somewhat monolithic venue (I say ‘somewhat’ because there are spheres of discourse outside of but still connected to Wikipedia) where there is an expectation of fairly representing, not simply ignoring, views one disagrees with. But the downside is that when the community norms break down, it can be harder to work around unfair exclusion.
I think your sense of timescale is a bit mismatched for Wikipedia drama vs. science drama, though. Wikipedia editors, even influential ones, are likely to retire from Wikipedia much sooner than the average crotchety scientist. Naturally, when the norms of either science or Wikipedia break down, one has to move to the realm of politics to redress them (whether academic politics or wikipolitics). But the channels for doing so on Wikipedia are more arcane, if no more or less transparent, than the ones in academia (where, indeed, the personal career consequences of political drama can be much more serious).
Ultimately, I think Joseph Reagle’s sociological approach might be the best way to compare the epistemic cultures of science and Wikipedia. (I’m shooting from the hip here, as my knowledge of sociology and psychology is minimal.) The norms are mostly very similar in science and Wikipedia (and where they differ, either set would be capable of creating reliable knowledge). The main difference comes from how thoroughly they are disseminated. Scientists have a long, formal process of enculturation, and they identify as “scientists” much more strongly than typical editors identify as “Wikipedians”. It’s the psychological consequences — commitment to the norms of science and the psychological costs of violating them — more than anything else that are effective in ensuring reliability in science. The same basic factors apply on Wikipedia, just less strongly.
Saying that there is no invisible hand on Wikipedia fails the basic reality check. If you compare Wikipedia with Uncyclopedia or Encyclopedia Dramatica (now *that* one has a puckish culture), the difference is obvious. Any theory that cannot explain why Wikipedia is at least to some degree reliable is probably worthless.