LOLcats as Soulcraft

Apparently Clay Shirky is working on a new book. He tweeted a possible title today: LOLcats as Soulcraft. I’m not sure what the book will be about (or whether that was at all a serious suggestion), but as I interpret it, it dovetails with some ideas I’ve been thinking about.

“LOLcats as Soulcraft” appears to play off of the essay-turned-book “Shop Class as Soulcraft” by Matthew B. Crawford, which argues that working with one’s hands, craft work, is intellectually and emotionally satisfying in ways that other kinds of work–either abstract-but-circumscribed “knowledge work” or routinized physical work in the industrial capitalism mode–are not. Crawford argues that craft work connects makers to the objects they make and fosters, in the face of a consumer culture based on disposability and black box technology, an ethic of upkeep and repair and respect for fine workmanship.

Shirky, I imagine, would take that argument for the virtues of craft work and extend it to the virtues of building the virtual commons. Participation in the digital commons, creating LOLcats and YouTube videos and fan fiction and Wikipedia articles and citizen journalism and free software, etc., creates a new sort of relationship between cultural works and audience (or former audience, if you prefer). “If you have some sans-serif fonts on your computer, you can play this game, too.”

This line of thinking naturally leads to one of the main questions both Crawford and Shirky think deeply about: what will/should society look like in the future? In particular, what will economic life be like? The digital commons–as resource, but even more so as an ethic–has the potential to basically cut the legs out from under the knowledge economy that has been increasingly prioritized in rich-world culture (especially in education). Already, as Crawford points out, the logic of scientific management is being applied to “knowledge work”, essentially routinizing it and taking the soul out of it. And the more the digital commons can replace its capitalist counterparts, the harder it will be to find any paid work in areas like software and mainstream media, much less fulfilling work.

In the long run, the democratization of the tools of digital production and the extremely low costs of “mass producing” digital products mean that we will be getting nearly everything that makes up the knowledge economy for free. So we may see an economy in the rich world that swings back towards physical goods and physical services. Modern mass production obviously can’t absorb many of those who will be displaced by the digital commons, so we will have to find new ways of getting by. Crawford’s hoped-for craft renaissance may be part of that. Learning to use less stuff may be another part. Alternatively, we might see massive concentration of wealth in those companies that make most of our food and our physical stuff (and then, possibly, reforms to the political economy to redistribute much of that wealth). As long as people can meet their basic needs in the future economy (up to and including rich access to the digital commons), LOLcats–and everything else they symbolize for Shirky–could go a long way toward displacing the consumer culture’s need for limitless economic growth.

It’s pretty hard to imagine what changes are being sown by the rising digital commons, but I imagine Shirky has some good ideas.

The most insane bit of U.S. copyright law?

I knew about many insanities in U.S. copyright law, but I just came across something that is so absurd and unjust it makes me queasy.

My dad is a professional musician; he plays blues, jazz, and original piano music, and has made five records. For professional musicians outside of pop music (and often in pop as well), copyright law is simply a burden, to the point that it is almost universally ignored. Gigging blues and jazz musicians have long used “fake books”, unauthorized charts of the melodies, lyrics, and chord structures of jazz standards. No one is worried about other musicians infringing on their copyrights, because jazz and blues (among other genres) are rooted in a culture of borrowing and adaptation. Drawing sharp lines between what can and can’t be borrowed or adapted is inimical to creativity, and indeed in academic jazz programs one learns to improvise by practicing the great “licks” from classic recordings.

But my dad, being the upright citizen that he is, has stuck with original compositions and reinterpretations of public domain classics on his albums. One classic he put on a 2004 album is “Love in Vain Blues”, a Robert Johnson tune that was first recorded in 1937. Johnson died in 1938, and the original recording was published on phonograph record in 1938 or 1939 (without a copyright notice); the copyright was not renewed after the then-standard initial 28-year term ended.

But as the result of a series of utterly insane laws and court decisions, it turns out that the song may be under copyright through 2047. Today, issuing a sound recording is considered publication. But according to the 2000 decision in ABKCO v. LaVere, sound recordings distributed before 1978 don’t count as publication. So despite the publication, re-publication, and widespread adaptation of Johnson’s “Love in Vain”, it was never “published” before 1978 because there was no sheet music. And because the song was created before 1978 but “published” for the first time afterward, the crazy rules go into effect: under 17 U.S.C. § 303, a work created before 1978 but first published on or before December 31, 2002 stays under copyright through the end of 2047. (ABKCO is the record label for which The Rolling Stones recorded some Robert Johnson songs; LaVere is the man who in 1974 tracked down Johnson’s surviving heir and made a deal to pursue royalties for Johnson’s music in exchange for half the takings.)

Here’s a great article on the Robert Johnson copyrights: “Borrowing the Blues: Copyright and the Contexts of Robert Johnson”, by Olufunmilayo Arewa.

**UPDATE**

Here’s another excellent article, arguing “that ABKCO, as well as the 1997 amendment to the Copyright Act that precipitated ABKCO, are legal anomalies that frustrate the intent of the Constitution.”

I just talked to my dad about it. He says “bring it on”. You can hear his version of “Love in Vain” at thesixtone, and on Wikipedia as soon as I transcode it. It’s pretty clear to anyone who a) knows how blues works, and b) knows anything about Robert Johnson and the lack of documentation about whether he even composed any particular song attributed to him, that there’s no basis for copyright claims on this stuff.

Database right and the NPG threat

The National Portrait Gallery’s legal threat against Wikimedian Derrick Coetzee alleges four things:

  1. Copyright infringement
  2. Database right infringement
  3. Unlawful circumvention of technical measures
  4. Breach of contract

The copyright issue, of course, is the center of the dispute. UK law is unsettled on whether mechanical reproduction of a public domain work is eligible for copyright.

IANAL, but breach of contract and unlawful circumvention both seem moot if there is no copyright infringement. A bit of text at the bottom of a page (with no mechanism for the user to acknowledge or refuse it) setting restrictive use terms for something that is public domain wouldn’t hold much weight. Likewise, even apart from the fact that Zoomify is not a security measure and arguably was not “circumvented”, if the images are public domain then simply collecting and stitching together tiles from those images (whether automatically or by hand) is perfectly legitimate.
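
To make the “stitching” concrete, here is a minimal sketch of reassembling a tiled image, assuming the tiles have already been saved locally under a hypothetical tile_<row>_<col>.jpg naming scheme (Zoomify’s actual tile layout differs):

```python
# Minimal tile-stitching sketch using the Python Imaging Library (Pillow).
# Assumes a 4x6 grid of 256x256 JPEG tiles saved as tile_<row>_<col>.jpg;
# the grid size and file names are hypothetical placeholders.
from PIL import Image

ROWS, COLS = 4, 6
TILE = 256  # tile edge length in pixels

canvas = Image.new("RGB", (COLS * TILE, ROWS * TILE))
for row in range(ROWS):
    for col in range(COLS):
        tile = Image.open(f"tile_{row}_{col}.jpg")
        canvas.paste(tile, (col * TILE, row * TILE))  # offset = grid position
canvas.save("stitched.jpg")
```

Nothing in this sketch bypasses any protection; it simply pastes publicly served image tiles back together.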

Database right, therefore, is the only claim that does not turn on whether ‘sweat of the brow’ copyrights hold up. The law here seems vague, but again, IANAL. The key question is what constitutes a “substantial part” of the contents of the NPG’s database. If the paintings themselves are public domain, then the mere unorganized collection of them ought not infringe on the database right, but depending on how much metadata and categorization comes from the same database, porting images to Wikimedia Commons might cross the line. For the images at hand, the amount of metadata looks modest: subject, author, date, and author’s date of death. The NPG database contains significantly more information: medium, size, provenance, and other contextual information, as well as links to related works and people. It is also possible that Coetzee’s actions fall under the “exceptions to database right”:

(1) Database right in a database which has been made available to the public in any manner is not infringed by fair dealing with a substantial part of its contents if –

(a) that part is extracted or re-utilised by a person who is apart from this paragraph a lawful user of the database,

(b) it is extracted or re-utilised for the purpose of illustration for teaching or research and not for any commercial purpose, and

(c) the source is indicated.

Self-preservation and the National Portrait Gallery’s dispute with the Wikimedia community

Running an organization is difficult in and of itself, no matter what its goals. Every transaction it undertakes–every contract, every agreement, every meeting–requires it to expend some limited resource: time, attention, or money. Because of these transaction costs, some sources of value are too costly to take advantage of. As a result, no institution can put all its energies into pursuing its mission; it must expend considerable effort on maintaining discipline and structure, simply to keep itself viable. **Self-preservation of the institution becomes job number one, while its stated goal is relegated to job number two or lower, no matter what the mission statement says.** The problems inherent in managing these transaction costs are one of the basic constraints shaping institutions of all kinds.

From: Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations, pp. 29-30 (my emphasis)

Shirky’s book is about “organizing without organizations”, a key example of which is the Wikimedia community (as distinct from the Wikimedia Foundation). The Wikimedia community can accomplish a lot of big projects–making knowledge and information and cultural heritage accessible and free–that traditional organizations would find far too expensive. And that paragraph from Shirky explains the root of the tension between the Wikimedia community and many traditional organizations with seemingly compatible goals–organizations such as the National Portrait Gallery in London, which sent a legal threat to Wikimedian Derrick Coetzee this week.

The NPG has a laudable mission and aims: “to promote through the medium of portraits the appreciation and understanding of the men and women who have made and are making British history and culture, and … to promote the appreciation and understanding of portraiture in all media”, and “to bring history to life through its extensive display, exhibition, research, learning, outreach, publishing and digital programmes.”

But in pursuing self-preservation first and foremost, the gallery asks a high price for its services of digitizing and making available the works it keeps: to fund the digitization of its collections and other institutional activities, the NPG would claim copyright on all the digital records it produces and prevent access to others who would make free digital copies. As one Wikipedian put it, the NPG is “trying to ‘Dred Scott‘ works already escaped into PD ‘back south’ into Copyright Protected dominion”.

If the choice is between a) waiting to digitize these public domain works until costs are lower or more funding is available, or b) diminishing the public domain and emboldening others who would do the same, then I’ll choose to wait.

The history of the future of journalism?

In the wake of the Iranian election, many of the people who focus on the changing journalism landscape have been talking about the significant role Twitter and other social media are playing in organizing and spreading news about the protests. Two of the leaders of the broad journalism discussion are Dave Winer and Jay Rosen, who have a weekly podcast called Rebooting the News. In the latest edition, Winer looks back to September 11, 2001 as the first time the online social web foreshadowed the kinds of citizen journalism that he and Rosen see as a major part of the future of news. As he explains, he had no TV at the time, but strictly through the internet he was just as informed and up to date as he would have been following the events of the day through traditional media.

Around 2001 is also a horizon for historians: from then until now, the internet preserves an ever richer archive of the experience of ordinary people in major historical events and trends.

In that vein, here’s a paper I wrote in 2005, for a course on narrative history with John Demos, about the usenet traces of the kind of thing Dave Winer recalls from 9/11. (I tried to weave in the pop-psychology framework of the five stages of grief, with mixed results.)

—————-

We historians like to think that things develop gradually. Yet, in the microcosm, the events of the following months and years were foreshadowed there in the cyberspace of New York City on September 11. All the questions of “why?” and “what now?” were hashed out in the hours following the attacks by net denizens as they struggled to come to grips with the grief of the nation.

After one hour thirty-six minutes of denial, the messages on the NYC General internet discussion group started with a comment calculated to jump-start conversation, going straight to bargaining:

Tues, Sep 11 2001 10:20 am

WTC: Bound to happen

I wonder if this will change our assistance plans to Israel?[1]

Circumspection was the word for the first few replies; questions, not answers. Who is sending us this message? Was it “internal”, like the Oklahoma City bombing, or was it the Palestinians or someone else? Is it just a coincidence that this is the 25th anniversary of the Camp David Accords?[2] Whoever it was, they were clearly well organized; they knew they had to use large planes with full fuel tanks to take out the World Trade Center Towers.

Just after noon, they were on to bin Laden as the likely culprit; it seemed like “his style.” Rumors that he had foretold an “unprecedented attack” two weeks earlier, including information from one woman’s unnamed friend from the intelligence community, provided one focus for the rising anger of the discussants. Israel and the celebrating Palestinians on TV were also popular targets of ire. Anger got the better of more than one:

Tues, Sep 11 2001 2:05 pm

Anyone cheering at thousands of Americans being murdered is a declaration of war as far as I’m concerned.

Tues, Sep 11 2001 6:02 pm

Did you all see the Palestinians dancing for joy today?

SCUM. Burn them all.

Calmer voices prevailed quickly, defusing talk of an indiscriminate crusade. But few seemed to doubt that war was on the horizon, even if not everyone had a clear idea of whom (or who) to fight:

Tues, Sep 11 2001 1:51 pm

>>>This must mean war.

>>With who?

>Afghanistan.

Any particular reason, or are you just starting [with] the A’s?

The possible complicity of Iraq was mentioned as well, and the failure to capture Saddam Hussein in the Gulf War illustrated how hard it might be to get bin Laden (if he was even the right target) in an Afghanistan war. But waging war on the Taliban, at least, might yield some human rights dividends, considering the way they treated their women.

The depressing, fatalistic seeds of the prolific conspiracy theories that developed in the months and years after the attack were there in the first hours too:

Tues, Sep 11 2001 11:39 am

I would not think (but I’m NOT an expert) that such impact would so weaken the structure as to cause both to collapse, without further destruction at a lower level.

and

Tues, Sep 11 2001 1:22 pm

I am just pointing out that I don’t think we can take out Bin Laden because if we could we would have done it long ago.

It would be months before online groups like the 9/11 Truth Movement would spin such speculation into elaborate tapestries of lies and manipulation, in which the strings are pulled by the man behind the man behind the man (with three U.S. Presidents, at least, in on it), with bin Laden as the fall guy who was working for the CIA all along. But common sense prevailed quickly in this particular cyber niche; the combination of fire and impact would have been enough to take down the towers, with all that weight above the impact points, they reasoned.

Ultimately, the tension on the internet that day was between anger and acceptance, and with the bombers apparently dead and the looming possibility that there might not be anyone left to blame, the discussants turned on each other:

Tues, Sep 11 2001 8:10 pm

[On the subject of celebrating Palestinians and possible PLO involvement in the attacks]

>>Gosh, you don’t suppose the Isreali blockade has anything to do with it, do you?

>And what does this have to do with just buying food???

Don’t know how the blockade works, do you?

>>>They aren’t feeding their people, giving them housing or water – no

>>As a matter of fact they are, as much as they can. But when Isreal takes their land

> Of, forget it. You’re brainwashed.

This is coming from someone who can’t tell the difference between the PLO and other arab organizations.

and

Tues, Sep 11 2001 8:31 pm

> I’m not the one advocating bombing anyone.

Ha. So you just want to let them do this and get away with it, eh?

This was the worst of that first 111-message thread–tame compared with many of the other virtual shouting matches that developed that afternoon. And ultimately, the feelings of anger won out on NYC General, coming into line with the zeitgeist of the rest of the nation as President Bush announced plans to hunt down the terrorists and those who harbor them. But elsewhere on the internet, then and now, every possible response from denial to acceptance has a place. And the stories will still be there waiting for us, for when we are ready to move on.

————————-

[1] This and all following quotes come from the USENET archive of nyc.general, as archived by Google Groups (http://groups.google.com/group/nyc.general). This discussion thread was started simultaneously on nyc.general, nyc.announce, alt.conspiracy (where it superseded such hot topics as “Moon Landings: Fact or Fiction?,” but did not change that group’s absurdist conspiratorial tone), talk.politics.misc (which was rapidly inundated with separate posts, preventing any sustained discussion), and soc.culture.jewish (where the endemic Zionist/anti-Zionist rhetoric drowned out this relatively moderate thread), and soon spread to other groups, fragmenting and spawning new discussions. There are probably hundreds of preserved usenet discussions documenting the immediate response of thousands of people on September 11.

[2] Actually the Camp David Accords were reached on September 17, 1978, making 9/11 just shy of the 23rd anniversary.

Biology Today, the ’70s textbook that would have made me a biologist

A few weeks ago, thanks to the blog A Journey Round My Skull (via Crooked Timber), I discovered Biology Today, an amazing college biology textbook from 1972. You can get the basics from the Wikipedia article I put together: [[Biology Today]]. But there’s a lot more to it than what I could put into a Wikipedia article without running afoul of the “no original research” policy–and a lot more than I can fit into a blog post. The reviewer of a bowdlerized later edition got it right: “The true story of the development of Biology Today would make an interesting book in itself.”

The text of Biology Today was apparently assembled from the work of a long list of “contributing consultants”. The list is star-studded, including James D. Watson and six other Nobel laureates (as well as Michael Crichton). The list–and the text–is dominated by molecular biology, which was reaching perhaps its cultural acme in the early 1970s.

A Journey Round My Skull has collected on Flickr many (but far from all) of the interesting and unusual “artist’s interpretations” and other images that make Biology Today such a magnificent artifact. Many of the diagrams are outstanding both aesthetically and conceptually.

The most lavish interleaf illustration is supposed to depict the “central dogma” of molecular biology with a three-panel view into the holy of holies, the DNA-filled nucleus, and a two-panel view of nucleic acids making their way into the cytoplasm and translating genetic information into proteins:

Biology Today nucleus

Biology Today cell interior

Molecular biologists, by the 1970s, thought of themselves not only as the future of science, but as the future of culture more generally. Many adopted the scientific humanism that had been championed by the previous generation of public biologists like Julian Huxley, although their gospel was the mechanistic and cybernetic worldview of molecular biology rather than the neo-Darwinism of Huxley and his allies. For intellectually and sexually liberated biologists (like Watson), anthropology and sexology displaced parochial religious ideas, and science had nothing to offer religionists but contempt or pity. Behold Noah’s Ark, from the chapter on “Human Sexual Behavior”:

Biology Today Noah's Ark

Evolution’s role in this textbook is a curious one. The only well-known contributor who can be considered primarily an evolutionary biologist is Richard Lewontin, a pioneer of molecular evolution and a frequent critic of adaptationism, sociobiology, and much of mainstream evolutionary theory in the 1960s and 1970s. The chapter on population genetics, which introduces the mechanisms of evolution (and doesn’t come until page 672!), looks like it was written by Lewontin; it treats, in turn, “genetic equilibrium”, “genetic drift”, “mutation”, “selection”, and “multiple factors”, with no particular emphasis on natural selection. Of course, whether one was a follower of the selection-centric modern evolutionary synthesis or not, Darwin was (and still is) the patron saint of biology.

But in Biology Today, veneration of nature, of the scientific life, and of humanity trumped veneration of Darwin. In the lyrical ten-page illustrated preface from biochemist Albert Szent-Györgyi, there is a passage (one of many) that could never be found in a mainstream biology textbook today, when creationists have turned their energies (in the form of Intelligent Design) to molecular biology, rather than the organismal evolutionary biology that earlier generations of creationists (and evolutionists) focused on. Working his way up through the levels of biological complexity, Szent-Györgyi makes his way to the mind:

“I do not think that the extremely complex speech center of the human brain, involving a network formed by thousands of nerve cells and fibers, was created by random mutations that happened to improve the chances of survival of individuals. I must believe that man built a speech center when he had something to say, and he developed the structure of this center to higher complexity as he had more to say. I cannot accept the notion that this capacity arose through random alterations, relying on the survival of the fittest. I believe that some principle must have guided the development toward the kind of speech center that was needed.”

For both cultural and scientific reasons, that’s not something you would catch many biologists saying today.

Reply to a tweeted link

Clay Shirky tweeted a link to this essay on the future of journalism, from Dan of Xark!. His site isn’t accepting my comment, so I’m posting it here:

This is an interesting vision of the future, but I don’t see how it could possibly be the future of journalism.

For the sake of argument, I’ll assume that collecting news data and maintaining a usefully organized database of it is a viable business model. I agree that it would not be newspapers leading this, but more likely a web-only company.

But newspapers (and to a much lesser extent, television) are the organizations that have an institutional commitment to investigative journalism (the kind that isn’t database-friendly and that is the main thing people fret about losing). Why would a news informatics company, which would lack that institutional commitment, use its profit to subsidize investigative journalism that isn’t itself profitable?

For newspapers, there have been two jobs that meet economically only at the broadest level: selling ads, and creating compelling content for readers. Economics didn’t figure directly in the choice of whether to send a reporter to the courthouse or to the fire; rather, that choice was made within the editorial sphere. For news informatics, every choice of coverage has economic implications: which kinds of data will people pay to access? In that environment, in what is sure to be a tough market to establish, would news informatics companies fund investigative journalism out of sheer civic responsibility?

Stanley Fish’s take on science vs. religion

Stanley Fish has a really eloquent column, “God Talk, Part 2”. Nominally about “science vs. religion”, it also speaks to why Wikipedia works and why even for partisans (in politics, in fighting popular pseudoscience or religionism, etc.), really embracing neutral point of view is more effective as a rhetorical strategy than shutting out the views one opposes.

One good bit:

So to sum up, the epistemological critique of religion — it is an inferior way of knowing — is the flip side of a naïve and untenable positivism. And the critique of religion’s content — it’s cotton-candy fluff — is the product of incredible ignorance.

As Fish’s own worldview should make clear, none of this should be taken as a defense of (any particular) religion or a rejection of science. But theological, philosophical and historical arguments have done far more to erode religious authority than scientific ones ever did. The ‘rally the faithful’ approach of Christopher Hitchens and Richard Dawkins does more harm than good.

[thanks @jayrosen_nyu on Twitter for the link]

The Two Cultures, 50 years later

7 May was the 50th anniversary of C. P. Snow‘s famous lecture The Two Cultures. Snow, a novelist who had studied science and held technology-related government positions, decried the cultural rift between scientists and literary intellectuals. Snow’s argument, and his sociopolitical agenda, were complex (read the published version if you want the sense of it; educational reform was the biggie), but, especially post-“Science Wars”, the idea of two cultures resonates beyond its original context. The current version of the Wikipedia article says:

The term two cultures has entered the general lexicon as a shorthand for differences between two attitudes. These are

  • The increasingly constructivist world view suffusing the humanities, in which the scientific method is seen as embedded within language and culture; and
  • The scientific viewpoint, in which the observer can still objectively make unbiased and non-culturally embedded observations about nature.

That’s a distinctly 1990s and 2000s perspective.

Snow’s original idea bears only scant resemblance to the scientism vs. constructivism meaning. As he explained, literary intellectuals (not entirely the same thing as humanities scholars) didn’t understand basic science or the technology-based socioeconomic foundations of modern life, and they didn’t care to. Novelists, poets and playwrights, he complained, didn’t know the second law of thermodynamics or what a machine-tool was, and the few that did certainly couldn’t expect their readers to know.

Humanistic studies of science (constructivist worldview and all) would have undermined Snow’s argument, but humanists were only just beginning to turn to science as a subject for analysis. (Kuhn’s Structure of Scientific Revolutions did not appear until 1962. Structure did mark the acceleration of sociological and humanistic studies of science, but it was actually taken up more enthusiastically by scientists than humanists. Constructivism only became widespread in the humanities by the 1980s, I’d guess, and its main thrust, when described without jargon, is actually broadly consistent with the way most scientists today understand the nature of science. It’s not nearly so radical as the popular caricature presented in Higher Superstition and similar polemics.) Rather than humanists understanding the scientific method or scientists viewing their work through a sociological or anthropological lens, Snow’s main complaint was that scientific progress had left the slow-changing world of literature and its Luddite inhabitants behind (and hence, that scientists found little use for modern literature).

Snow wrote that “naturally [scientists] had the future in their bones.” That was the core of the scientific culture, and the great failing of literary culture.

Looking back from 2009, I think history–and the point in it when Snow was making his argument–seems very different than it did to Snow. Who, besides scientists, had the future in their bones in 1959? In the 1950s academic world, literature was the pinnacle of ivory tower high culture. Not film, not television, certainly not paraliterary genres like science fiction or comic books. History of science was a minor field that had closer connections to science than to mainstream history.

Today, in addition to scientists, a whole range of others are seen as having “the future in their bones”: purveyors of speculative fiction in every medium; web entrepreneurs and social media gurus; geeks of all sorts; venture capitalists; kids who increasingly demand a role in constructing their (our) own cultural world. The modern humanities are turning their attention to these groups and their historical predecessors. As Shakespeare (we are now quick to note) was the popular entertainment of his day, we now look beyond traditional “literary fiction” to find the important cultural works of more recent decades. And in the popular culture of the 1950s through to today, we can see, perhaps, that science was already seeping out much further from the social world of scientists themselves than Snow and other promoters of the two-cultures thesis could recognize–blinded, as they were, by their strict focus on what passed for high literature.

Science permeated much of anglophone culture, but rather than spreading from high culture outward (as Snow hoped it might), it first took hold in culturally marginal niches and only gradually filtered into insulated spheres of high culture. Science fiction historians point to the 1950s as the height of the so-called “Golden Age of [hard] Science Fiction”, when SF authors could count on their audience to understand basic science. Modern geek culture–and its significance across modern entertainment–draws in part, we now recognize, from the hacker culture of 1960s computer research. Feminists and the development of the pill; environmentalists; the list of possible examples of science-related futuremaking goes on and on, but Snow had us looking in the wrong places.

Certainly, cultural gaps remain between the sciences and the humanities (although, in terms of scholarly literature, there is a remarkable degree of overlap and interchange, forming one big network with a number of fuzzy divisions). But C. P. Snow’s The Two Cultures seems less and less relevant for modern society; looking back, it even seems less relevant to its original context.

Rethinking Wikinews

Digital opinion-makers across the blogosphere and the twitterscape have been increasingly preoccupied with the rapid decline of the print news industry. Revenues from print circulation and print advertising have both shrunk dramatically, and internet advertising revenues have so far been able to replace only a fraction of that. Newspapers throughout the U.S. are downsizing, some are switching to online-only, and some are simply being shuttered. The question is: what, if anything, will pick up the journalistic slack? (Clay Shirky’s essay, “Newspapers and Thinking the Unthinkable”, is the best thing I’ve seen in this vein, although I would be remiss if I didn’t mention some contrasting viewpoints, such as Dave Winer’s “If you don’t like the news…” and Jason Pontin’s response to Shirky and Winer, “How to Save Media”.)

On its face, Wikinews seems an ideal project to pick up some of that slack. Collaborative software + citizen journalism + brand and community links to Wikipedia…it seems like a formula for success, and yet Wikinews remains a minor project. There are typically only 10-20 stories per day, most of which are simply summaries of newspaper journalism. Stories with first-hand reporting are published about once every other day, and even many of these rely primarily on the work of professional journalists and have only minor original elements.

Why doesn’t Wikinews have a large, active community? What might a successful Wikinews look like? I have a few ideas.

One reason I write and report for Wikipedia regularly, but only every once in a while for Wikinews, is that writing Wikipedia articles (and writing for the Wikipedia Signpost) feels like being part of something bigger. Everything connects to work that others are doing. I know I’m part of a community working for common goals (more or less). Even if I’m the only contributor to an article, I know there are incoming links to it, that it fits into a broader network. On Wikinews, I can write a story, but it is likely to be one of maybe 20 stories for the day, none of which have much of anything to do with each other.

I went to the Tax Day Tea Party in Hartford, Connecticut with my camera and a notepad. (I put a set of 108 photos on Commons and on Flickr.) Similar protests reportedly took place in about 750 other cities. If there was ever an opportunity for collaborative citizen journalism, this seemed like it. But there was nothing happening on Wikinews, and I didn’t see the point of writing a story about one out of hundreds of protests, which wouldn’t even be a legitimate target for a Wikinews callout in the related Wikipedia article.

What I take from this is the importance of organization. Wikinews needs a system for identifying events worth covering before (or while) they happen and recruiting users for specific tasks (e.g., “find out the official police estimate of attendance, photograph and/or record the messages of as many protest signs as possible, and gather some quotes from attendees about why they are protesting”).

My most rewarding experience with Wikinews was a story on the photographic origins of the Obama HOPE poster. It grew out of a comment on the talk page of the poster’s Wikipedia article; the comment appeared while the article was on the Main Page as a “Did you know” hook. The lesson here is, in the (alleged) words of Clay Shirky, “go where people are convening online, rather than starting a new place to convene”. (I think it was unfortunate that Wikinews started as a separate project rather than a “News:” namespace on Wikipedia, but what’s done is done.) There are many places online where people gather to discuss and produce news, in addition to Wikipedia; one path to success might be to extend the social boundaries of Wikinews to reach out to existing communities. Although other citizen journalism and special interest communities don’t share the institutional agenda of Wikinews (namely, NPOV as a core principle), some members of other communities will be willing to create or adapt their work to be compatible with Wikinews’ requirements. And certain communities actually do share a commitment to neutrality, which raises the possibility of syndication arrangements (in which, e.g., original news reports from a partner site automatically get added to the Wikinews database as well).
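
As a rough sketch of how such syndication might work in practice–assuming a hypothetical partner site that publishes its original reporting in a standard RSS/Atom feed, and using the feedparser library–something like this could flag entries for import:

```python
# Sketch of feed-based syndication: pull entries from a partner site's feed
# and flag those tagged as original reporting for import into Wikinews.
# The feed URL and the "original-reporting" tag are hypothetical.
import feedparser

FEED_URL = "https://partner-news.example.org/feed.rss"

def syndication_candidates(feed_url):
    """Return (title, link) pairs for entries tagged as original reporting."""
    feed = feedparser.parse(feed_url)
    return [
        (entry.title, entry.link)
        for entry in feed.entries
        if "original-reporting" in {t.term.lower() for t in entry.get("tags", [])}
    ]

for title, link in syndication_candidates(FEED_URL):
    print(f"Candidate for Wikinews import: {title} <{link}>")
```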

Shirky and others have argued that some kinds of journalism (in particular, investigative journalism) are not possible without assigning or permitting reporters to develop a story in depth over a long period of time–and these may be the most important kinds of journalism for maintaining a healthy democracy. To some extent, alternative finance models (with public donations, like National Public Radio, or with endowments, like The Huffington Post) may be filling some of the void left by shrinking newspaper staffs, but it seems unlikely that these models will support anything close to the number of journalists that newspapers do/did.

Wikinews could contribute to investigative journalism in a couple of ways. The simplest is something similar to what Talking Points Memo does–crowdsourcing the analysis of voluminous public documents to identify interesting potential stories. However, as Aaron Swartz recently argued, there are serious limits to what can be gleaned from public documents; as he says, “Transparency is Bunk”.

Another way would be to either fund a core of professionals or collaborate with investigative journalists who work for other non-profits. These professional journalists would–to the extent that it is possible–recruit and manage volunteer Wikinewsies to pursue big stories where the investigative work required is modular enough that part-time amateurs can fruitfully contribute.

In the same vein, a professional editor working for Wikinews could be in charge of identifying self-contained reporting opportunities based on geography (e.g., significant political and cultural events) and running an alert system (maybe integrated with the Wikipedia Geonotice system, for users who opt in) to let users know what’s happening near them that they could report on. One of the hardest things for a would-be Wikinewsie doing original reporting is just figuring out what needs covering.
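
A minimal sketch of the geographic matching such an alert system would need–the events, user locations, and notification radius below are all hypothetical, and a real Geonotice integration would work through Wikipedia’s interface rather than standalone code:

```python
# Sketch of an opt-in geographic alert: notify users within a fixed radius
# of an upcoming reporting opportunity. All data here is made up.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

events = [("Tax Day Tea Party, Hartford CT", 41.76, -72.67)]
users = [("user_nearby", 41.31, -72.92), ("user_far", 51.51, -0.13)]
RADIUS_KM = 100  # hypothetical opt-in notification radius

for event, elat, elon in events:
    for user, ulat, ulon in users:
        if haversine_km(elat, elon, ulat, ulon) <= RADIUS_KM:
            print(f"Notify {user}: reporting opportunity nearby: {event}")
```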

I’m sure there are a lot of different models for Wikinews that could make it into a successful project. But it’s clear that the current one isn’t working very well.