Wikipedia’s search engine dominance = informational homogeneity?

Nicholas Carr (of “Is Google Making Us Stupid?” fame) is a consistent source of thought-provoking but (in my view) off-base critiques of the information age in general and Wikipedia in particular. He has an interesting post on the Britannica Blog, “All Hail the Information Triumvirate”. It coincides with Britannica’s rollout of new features inviting readers to suggest improvements, and with the usual impotent snipes from Robert McHenry and other Britannica editors. Wikipedia gets 97% of all encyclopedia traffic on the Internet, so they have little to do but whine about the culture that let this happen and/or try to learn from Wikipedia’s success.

A favorite tactic of Wikipedia critics is to bemoan Wikipedia’s search engine success. Carr demonstrates Wikipedia’s dominance of results from the most popular search engine (Google): for ten diverse searches that he first ran in August 2006, then again in December 2007, and again this month, Wikipedia articles rose from an average placement of #4 to being the #1 hit for all ten. Carr “wonder[s] if the transformation of the Net from a radically heterogeneous information source to a radically homogeneous one is a good thing” and has difficulty imagining “that Wikipedia articles are actually the very best source of information for all of the many thousands of topics on which they now appear as the top Google search result.” But this rings hollow without examples (say, for any of his ten searches) of what single web pages would be better starting points.

The idea that the Net has become “radically homogeneous” just because Wikipedia is often the first Google hit is absurd. Wikipedia itself is far from homogeneous, and indeed its great strength is the way it brings together the good parts of many of the other sources of information on the Internet (and beyond). Carr’s implication seems to be that without Wikipedia (the “path of least resistance” for information delivery) search results would be better and finding valuable web content would be easier.

Carr seems to conceive of Wikipedia as a filter placed over Google that lets through only a homogeneous mediocrity. Wikipedia is better thought of as a refined version of Google’s method of harnessing the heterogeneity of the Internet; where Google relies on a purely mechanical process, Wikipedia brings together sources with consideration of the individual topic at hand and human evaluation of the importance and reliability of each source.
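To make the contrast concrete: the “purely mechanical process” at Google’s core is link analysis along the lines of PageRank, which rewards whatever pages the rest of the web links to, with no judgment about the topic itself. The sketch below is a toy version of that idea (the link graph and page names are invented for illustration; the real algorithm is vastly more elaborate):

```python
# Toy PageRank on a tiny, made-up link graph. This illustrates the
# mechanical nature of link analysis: rank flows along links, so a page
# that many others link to accumulates rank regardless of its content.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            if not outs:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                # Each page splits its rank among the pages it links to.
                for out in outs:
                    new[out] += damping * rank[page] / len(outs)
        rank = new
    return rank

# Hypothetical graph: several pages link to "wikipedia", so it ends up
# with the highest rank even though no one evaluated its quality.
graph = {
    "blog_a": ["wikipedia"],
    "blog_b": ["wikipedia", "blog_a"],
    "news": ["wikipedia"],
    "wikipedia": ["news"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "wikipedia"
```

The point of the sketch is that the algorithm has no notion of which page is the “best starting point” for a topic; it only aggregates the linking behavior of everyone else, which is exactly what the first commenter below argues drives Wikipedia to the top.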

100 thoughts on “Wikipedia’s search engine dominance = informational homogeneity?”

  1. Wikipedia is first in Google because its articles are the best on average, so it is easier and cheaper (both for a search algorithm and for a human reader) to identify them as (mostly) high quality than to assess the quality of individual pages. This is the very point of an encyclopedia (EB is not the best book for all topics; actually, it is probably not the best for any of them, but it is better for most topics than any single book), so it is a bit strange to read this lamentation on information homogeneity on the EB blog, of all places. They didn’t have any problems with it while they were the top suppliers of homogenized information…

    1. “Wikipedia is the first in Google because its articles are the best on average”

      I think this assertion is seriously wrong, regardless of the quality of Wikipedia articles. Even if the articles are good, which they often are, THE reason why Wikipedia is first in Google is simply that most bloggers and journalists are too lazy to do what Wikipedia itself suggests (“quote several primary sources”).

      They just add a link to Wikipedia. It’s the number of links given to Wikipedia, even when a primary source exists, that makes its pages rank first. That’s why I added lazy quoting of Wikipedia to my 2010 Online Loser Guide at http://stop.zona-m.net/node/66 (which also includes links to Wikipedia policies on quoting sources).
