I want a better social network

Facebook kinda sucks, and it’s not doing much to foster an informed and politically engaged citizenry. It certainly doesn’t help me to be a better citizen. Here’s what a better social network might look like.

Incentives for political engagement

Likes and comments from friends are the main drivers of both the creation of new posts and the spread of content through the newsfeed. I post things because it’s nice to feel liked and loved and to have people interested in what I have to say. Things that inspire strong emoji and pile-on comments are the most likely to earn me likes, and also the most likely to show up in my feed.

Imagine, instead, if local political engagement — showing up to a town council meeting, or calling my state legislator about a bill currently in discussion, or reporting a pothole — were the currency of my social network. I want something like the Sunlight Foundation’s tools in the middle of my online social experience. I want to see what my friends are saying, but also what they’re doing — especially when it’s something I can join in on.

Maybe streaks, like GitHub had?
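A GitHub-style streak is simple to compute if each civic action gets logged with a date. A minimal sketch, assuming a hypothetical log of action dates (the function name and data shape are my own invention, not any existing API):

```python
from datetime import date, timedelta

def current_streak(action_dates, today=None):
    """Length of the run of consecutive days, ending today or
    yesterday, on which the user logged at least one civic action."""
    if today is None:
        today = date.today()
    days = set(action_dates)
    # Count from today if there's already an action today; otherwise
    # from yesterday, so a streak isn't "broken" mid-day.
    day = today if today in days else today - timedelta(days=1)
    streak = 0
    while day in days:
        streak += 1
        day -= timedelta(days=1)
    return streak

actions = [date(2021, 3, 1), date(2021, 3, 2), date(2021, 3, 3)]
current_streak(actions, today=date(2021, 3, 3))  # 3
```

The grace period for "yesterday" matters in practice: GitHub-style streaks feel punishing when one missed midnight resets months of effort, so a real system might be even more forgiving.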

Whatever the mechanisms, the things that are satisfying and addicting on a better social network should be the things that are also good for people.

Tools for collaboration

Discussions on Facebook, even when it comes to long-term issues of public importance, are ephemeral. There’s no mechanism for communities and networks to build and curate shared knowledge and context.

Local community wikis (like the handful of successful ones on localwiki.org) are still a good idea, they just lack critical mass. They would work if integrated into a better social network.

For non-local things — the quality of news sources, organizations, and everyday consumer issues — something more like aggregate reviews should be part of the system.

No ads

A big, distracting part of my Facebook feed is the ads and promoted stories. These are mostly extra-clickbait-y, ad-heavy versions of the same kinds of clickbait showing up in my feed anyway. More fundamentally, showing ads is what Facebook is designed for. Everything that is done to make it interesting and addicting and useful is ultimately an optimization for ad revenue. When one user experience change would improve the well-being of users and another would lead to 1% more ad impressions, Facebook will take the ad-driven path every time.

A better social network wouldn’t have ads.

Free software that respects privacy

Obviously, being able to get your data out and move it to another host would be a feature of an ideal social network. If the people who run it start doing things against your interests, you should have better alternatives than just signing off and deleting everything.



To recap: I want to take Facebook, Nextdoor, the Sunlight Foundation, Wikipedia, and lib.reviews, smash them all together into a great user experience under an AGPL license, and kill Facebook.

Now is the perfect time to take another shot at it. If there’s anyone working on something like this seriously, sign me up to help.

review of Good Faith Collaboration

Joseph Reagle’s Good Faith Collaboration: The Culture of Wikipedia is a major step forward for understanding “the free encyclopedia that anyone can edit” and the community that has been building it for the past decade. Based on Reagle’s dissertation, the book takes a broadly humanistic approach to exploring what makes the Wikipedia community tick, combining elements of anthropology, sociology, history, and science & technology studies.

The book opens with an example of how Wikipedia works that turns the famous “Godwin’s law” on its head: unlike the typical Internet discussion where heated argument gives way to accusations of Nazism, Wikipedians are shown rationally and respectfully discussing actual neo-Nazis who have taken an unhealthy interest in Wikipedia. This theme of “laws” carries throughout the book, which treats the official and unofficial norms of Wikipedia while turning repeatedly to the humorous and often ironic “laws of Wikipedia” that contributors have compiled as they tried to come to an understanding of their own community.

Reagle’s first task is to put Wikipedia into historical context. It is only the most recent in a long line of attempts to create a universal encyclopedia. And what Reagle shows, much better than prior, more elementary pre-histories of Wikipedia, is just how much Wikipedia has in common–in terms of aspiration and ideology–with earlier efforts. The “encyclopedic impulse” has run strong in eccentrics dating back centuries. But the real forerunners of Wikipedia come from the late 19th and early 20th centuries: Paul Otlet’s “Universal Bibliographic Repertory” and H.G. Wells’ “World Brain”. Both projects aspired to revolutionize how knowledge was organized and transmitted, with implications far beyond mere education. Just as the Wikimedia Foundation’s mission statement implies–“Imagine a world in which every single human being can freely share in the sum of all knowledge…”–Otlet and Wells saw Utopian potential in their projects. Those efforts were based on new technologies–index cards and microfilm–and each new wave of information technology since then has inspired another attempt at a universal knowledge resource: Project Xanadu, Project Gutenberg, Interpedia, Distributed Encyclopedia, Nupedia, GNUpedia. Wikipedia, Reagle argues, is the inheritor of that tradition.

Next, Reagle sets out to capture the social norms that the Wikipedia community uses as the basis for its communication and collaboration practices. These will be very familiar to Wikipedians, but Reagle does a nice job of explaining the concepts of “neutral point of view” and the call to “assume good faith” when working with other editors, and how these two norms (and related ones) underlie Wikipedia’s collaborative culture. Of course, Reagle readily recognizes that these norms have limits, and one doesn’t have to go far into Wikipedia’s discussion pages to find examples where they break down. But understanding the aspirations of the community in terms of these norms is the first step to an overall picture of how and why Wikipedia works (and, at times, doesn’t work).

Reagle then turns to consider the “openness” of Wikipedia, which is an example of what he calls an “open content community”. Wikipedia’s effort to be the “encyclopedia that anyone can edit” means that inclusiveness creates a continual set of tensions–between productive and unproductive contributors, between autonomy and bureaucracy, between transparency and the tendency of minorities to form protected enclaves.

Decisionmaking and leadership on Wikipedia are even bigger challenges than openness. In successive chapters, Reagle examines the concept of “consensus” as practiced by the Wikipedia community and the role that founders Jimmy Wales and Larry Sanger played in setting the early course of the project.

The ideal of consensus was inherited from earlier open technical communities like the Internet Engineering Task Force, whose credo declares “We reject: kings, presidents and voting. We believe in: rough consensus and running code.” But that ideal doesn’t map precisely onto Wikipedia, in part because the “running code” of Wikipedia content isn’t as easy to evaluate as a computer program. Reagle also draws an intriguing comparison between Wikipedia’s still-unsettled notions of consensus and the practices of a more mature consensus-based community: the Quakers. Wikipedia lacks some of the roles and traditions that support decision-making in Quaker groups, and one implication of Reagle’s discussion is that Wikipedians might be able to learn a lot about effective consensus-based governance from the Quakers.

The lasting imprint of Wikipedia’s founders, the “good cop” Wales and the “bad cop” Sanger, has been treated a number of times before. But Reagle’s is the clearest account yet of how the tension between their different ideas for how to structure a voluntary encyclopedia project played out. Especially in the early years of Wikipedia, Wales’ role was primarily focused on maintaining a healthy community and balancing the perspectives of community members, highlighting good ideas and attempting to build consensus rather than promoting his own specific ideas. Even from early on, though, Wales’ role as “benevolent dictator” (or “God-King”, in the negative formulation) was a source of tension. Reagle notes that this tension is a recurring feature in open content communities; even the half-joking titles given to Wales are part of a broader tradition that traces to early online communities.

From my perspective as a Wikipedian–already familiar with norms and much of the short history of Wikipedia–the most powerful part of the book is the discussion of “encyclopedic anxiety”. Reagle argues that reference works have long provoked reactions from broader society that say more about general social unease than the specific virtues and faults of the reference work at hand. Wikipedia is a synecdoche for the changes taking place in information technology and the media landscape, and has served as a reference point for a wide gamut of social critics exploring the faults and virtues of 21st century online culture. That is not to say criticism of Wikipedia is always, or even usually, off-base. But what critics latch onto, and what they don’t, involves the interplay of the reality of Wikipedia and its role as a simultaneous exemplar for many social currents and trends.

Good Faith Collaboration is an enjoyable read, erudite but well-written and straightforward. It will be required reading for anyone serious about understanding Wikipedia.

*disclaimer: I consider Joseph Reagle a friend, and he thanks me in the preface. I read and commented on early versions of parts of the book. At the time of writing this review (October 2010) I also work for the Wikimedia Foundation, the non-profit that runs Wikipedia. But neither of those factors would stop me from being harsh if I thought the book deserved it. The review represents my personal opinion.

Plagiarism and authorship

From a New York Times article, “Plagiarism Lines Blur for Students in Digital Age”:

…these cases — typical ones, according to writing tutors and officials responsible for discipline at the three schools who described the plagiarism — suggest that many students simply do not grasp that using words they did not write is a serious misdeed.

It is a disconnect that is growing in the Internet age as concepts of intellectual property, copyright and originality are under assault in the unbridled exchange of online information, say educators who study plagiarism.

Digital technology makes copying and pasting easy, of course. But that is the least of it. The Internet may also be redefining how students — who came of age with music file-sharing, Wikipedia and Web-linking — understand the concept of authorship and the singularity of any text or image.

Remixing, building on the work of others, collaborating (often anonymously), challenging the very premise of intellectual property… these are all happening.  And yes, the web makes plagiarism easier than ever to conduct (and to discover).  But is student plagiarism really coupled with changing conceptions of authorship?

I haven’t seen much evidence of that.  In the NYT article, I see instead people using plagiarism to attack values and ideas they don’t like.  For example, anthropologist Susan D. Blum, author of My Word!: Plagiarism and College Culture:

She contends that undergraduates are less interested in cultivating a unique and authentic identity — as their 1960s counterparts were — than in trying on many different personas, which the Web enables with social networking.

“If you are not so worried about presenting yourself as absolutely unique, then it’s O.K. if you say other people’s words, it’s O.K. if you say things you don’t believe, it’s O.K. if you write papers you couldn’t care less about because they accomplish the task, which is turning something in and getting a grade,” Ms. Blum said, voicing student attitudes. “And it’s O.K. if you put words out there without getting any credit.”

So plagiarism is a way to cast changing concepts of authorship and originality (and the politics of free culture that go with that) as moral failings.

Them and us: ROFLCon folks and Wikipedians

geeks of a different flavor

This weekend, I went up to Cambridge for ROFLCon II (see my pics).  It was a wonderful, happy, fun, smart conference, and I was really struck by the sense of solidarity among participants, who all consider themselves part of “Internet culture”.

Being part of a culture means drawing lines between “us” and “them”, and whenever Wikipedia was discussed I got the distinct impression that for ROFLCon folks, Wikipedia clearly falls into the category of “them”.  I was one of very few Wikipedians there that I know of (Stuart Geiger was there; I found out that Tim Pierce, a panelist who played a big role in Usenet history, is a Wikipedian; and I saw Benjamin Mako Hill briefly).  That’s not to say that ROFLCon folks don’t like Wikipedia; respect–including respectful criticism–was the dominant tone.  But as one of the Know Your Meme folks lamented in the final panel, “Wikipedia doesn’t care about memes”–and, by extension, a lot of other significant aspects of Internet culture that are not being documented by mainstream sources.  In a lot of ways, especially through policy, Wikipedia explicitly distances itself from Internet culture.

It’s also striking how different the ROFLCon social atmosphere was compared to virtually every Wikipedian gathering I’ve been to.  We–Wikipedians–are, on the whole, geeks of a different flavor.  ROFLCon is a conference of extroverts; Wikipedians tend to be more introverted.  At Wiki Conference New York City last year, one outsider suggested after hanging out with us for a while that maybe one reason for the gender imbalance among Wikipedians is that males are more likely to be aspies–and by implication, that Wikipedians, or at least the ones who come together to share their passion for Wikipedia, don’t seem like neurotypicals.  In my own experience Wikipedian gatherings can be wonderful, they just usually take a while for everyone to get comfortable with each other and start to let their personalities out.  ROFLCon (which at least gave me the impression of being closer to gender-balanced, although I didn’t try to calculate) was a conference of fast friendliness–even for people with rivalries and bad blood between them.

Ben Huh hugs moot, after harsh words

silly videos and obscure post-structuralist terms

Evgeny Morozov has a new review of Jaron Lanier’s You Are Not a Gadget, and he spends a fair bit of it talking about Wikipedia, the touchstone for how the Internet is changing culture.  (Wikipedia researcher Ed Chi offered to review it for the Signpost, but Knopf publicity has so far ignored my every attempt to request a review copy.)  As I understand it, the book is in part an extension of Lanier’s Wikipedia-centered 2006 essay “Digital Maoism: The Hazards of the New Online Collectivism”.  I haven’t read the book, but I trust Morozov’s assessment.  His central point is this:

Technology has penetrated our lives so deeply and so quickly that the only way to make sense of what is happening today requires not only drinking from the anecdotal fire hose that is Twitter, but also being able to contextualise these anecdotes in broader social, historical and cultural settings. But that’s not the kind of analysis that is spitting out of Silicon Valley blogs.

So who should be doing all of this thinking? Unfortunately, Lanier only tells us who should not be doing it: “Technology criticism should not be left to the Luddites”. Statements like this establish Lanier’s own bona fides – as a Silicon Valley maverick unafraid to confront the cyber-utopian establishment from the inside – but they fail to articulate any kind of vision for how to improve our way of discussing technology and its increasingly massive impact on society.

Morozov says that our understanding of the legal dimensions of the Internet has been elucidated by the likes of Zittrain, Lessig and Benkler.  But humanists and social scientists, he says, have let us down in their duty to explore the cultural dimensions of the rise of the networked society, by either ignoring it or relying on “obscure post-structuralist terms” that occlude whatever insights they might or might not have.

The overall point, that the academy hasn’t done enough to make itself relevant to ongoing techno-cultural changes, is right on target.  But I think Morozov’s glib dismissal of work in media studies, sociology, anthropology, etc., is unfair to both the main ideas of post-structuralism and the writing skills of the better scholars who do work on technology and culture (Henry Jenkins and Jason Mittell come to mind, but I’m sure there are plenty of others).  Lanier’s epithet of “digital Maoism” is crude red-baiting; I’m not sure whether Morozov’s jargon jibe is red-baiting (post-structuralism being the province of the so-called academic left), whether he genuinely doesn’t think much of how humanists have analyzed the Internet, or whether he is just being contrary.

Post-structuralism is complicated (and I don’t pretend to be an expert) but what’s relevant in this context, I think, is (as the Wikipedia article obtusely puts it) the idea of “the signifier and signified as inseparable but not united; meaning itself inheres to the play of difference.”  Put another way, culture (that is, a work of culture) is valuable in whatever ways culture (that is, a culture, a group of people) values it; what matters is not the work itself (and its inherent or intended meaning) but the relationship between a work and its audience.  Related to this is a value judgment about what kinds of culture are better or more worthy of attention: “writerly” works that leave more opportunity for an audience to create its own meanings vs. “readerly” works that are less flexible and open to reinterpretation.  The relevance of these ideas for the Internet’s effects on culture should be obvious: audiences now have ways of collaborating in the creation of new meanings and the reinterpretation of cultural works, and can often interact not only with authors’ work, but with the authors themselves (thereby influencing later works).

So when Lanier sneers at ‘silly videos’ and Morozov complains that Lessig doesn’t address “whether the shift to the remix culture as a primary form of cultural production would be good for society”, I can’t help but see it as the crux of a straw man argument.  You would have us give up our current system that creates such wonderful culture (left helpfully unspecified, since there’s no accounting for taste) in exchange for remixed YouTube tripe? But humanists are starting to place more value on the capital-intensive products of the culture industry precisely because of the way that audiences can remix them and reuse them and create meanings from them.

YOYOW vs. privacy and anonymity

Laura DeNardis, in her presentation for the “Technologies of Dissent” panel at the Access to Knowledge and Human Rights conference today, illustrated the dangers of too much openness and access to certain kinds of knowledge by pointing to eightmaps.com, a mashup of Google Maps and donor data for the Prop 8 anti-same-sex-marriage campaign in California: you can find out right where these donors live in San Francisco.

Later in the panel, Eddan Katz of the Electronic Frontier Foundation was emphasizing the virtues of online anonymity for facilitating free expression and dissent (with EFF’s Tor software, for example).

Obviously, most people at this conference think Prop 8 is a bad thing while anonymous communication between dissidents in places with oppressive and censorious governments is a good thing.  But is there a principled argument that eightmaps.com is good and legitimate and those Prop 8 donors ought not be able to hide from the public, while dissidents in Iran or China ought to be able to organize and speak out and push for their favored kinds of political change behind the cloak of anonymity?

It’s the tension, as panelist Anupam Chander explained it, between the Foucault and the Habermas versions of the Internet’s potential: universal panopticon surveillance state vs. universal public sphere for rational discourse.

My own view is that there’s a balance to be struck between the classic net principle of YOYOW (“You Own Your Own Words”, meaning both that you can say what you want to say and you are responsible for what you say) and the right to speak anonymously.  The balance (one of the driving tensions in the history of the Wikipedia community, incidentally) is essentially the question of the limits of anonymous speech and action.

(Shooting from the hip here) I suggest a rule of thumb: the closer the political environment approximates an ideal Habermasian public sphere, the stronger the imperative that people own their own words when they choose to engage in public discourse.  Likewise, the more limits on what people are allowed to say, the more right they have to engage in a wider variety of anonymous speech and action.  (For speech that is not intended to be part of the public sphere, things are quite different and there is more of an argument for privacy and anonymity.)

[A summary of the whole panel is up on the Yale ISP blog: A2K4 Panel II: Technologies of Dissent: Information and Expression in a Digital World]

“Minds for Sale” (or, “Clickworkers of the world, unite!”)

This recent lecture by Jonathan Zittrain is long, but well worth it.  It’s about various forms of crowdsourcing and clickwork, and their scary potential for exploitation, political manipulation, political repression, and other bad stuff, related to what I’ve blogged about Demand Media vs. Wikimedia and the psychology of fun and games.

The send-up of Wikipedians and why Wikipedia isn’t on Subvert and Profit is kinda cute at 39:20.

Wikipedia in theory (Marxist edition)

The zeroth law of Wikipedia states: “The problem with Wikipedia is that it only works in practice. In theory, it can never work.”

That’s largely true of the kinds of theory that are most closely related to the hacker-centric early Wikipedia community: analytical philosophy, epistemology, and other offshoots of positive philosophy–the kinds of theory most closely related to the cultures of math and science.  (See my earlier post on “Wikipedia in theory”.)  But there’s another body of theory in which Wikipedia’s success can make a lot of sense: Marxism and its successors (“critical theory”, or simply “Theory”).

A fantastic post on Greg Allen’s Daddy Types blog, “The Triumph of the Crayolatariat”, reminded me (indirectly) of how powerful Marxist concepts can be for understanding Wikipedia and the free software and free culture movements more broadly.

It’s a core principle of post-industrial political economy that knowledge is not just a product created by economic and cultural activity, but a key part of the means of production (i.e., cultural capital).  Software, patentable ideas, and copyrighted content of all sorts are the basis for a wide variety of production.  Software is used to create more software as well as visual art, fiction, music, scientific knowledge, journalism, etc.  (See “Copyleft vs. Copyright: A Marxist Critique”, Johan Söderberg, First Monday.) And all those things are inputs into the production of new cultural products.  The idea of “remix culture” that Larry Lessig has been promoting recently emphasizes that in the digital realm, there’s no clear distinction between cultural products and means of cultural production; art builds on art.  (Lessig, however, has resisted associations between the Creative Commons cultural agenda and the Marxist tradition, an attitude that has brought attacks from the left, e.g., the Libre Society.)

Modern intellectual property regimes are designed to turn non-material means of production into things that can be owned.  And the free software and free culture movements are about collective ownership of those means of production.

Also implicit in the free culture movement’s celebration of participatory culture and user-generated content (see my post on “LOLcats as Soulcraft”) is the set of arguments advanced by later theorists about the commodification of culture.  A society that consumes the products of a culture industry is very different from one in which producers and consumers of cultural content are the same people–even if the cultural content created was the same (which of course would not be the case).

What can a Marxist viewpoint tell us about where Wikimedia and free culture can or should go from here? One possibility is online “social networking”.  The Wikimedia community, and until recently even the free software movement, hasn’t paid much attention to social networking or offered serious competition to proprietary sites like Facebook, MySpace, Twitter, etc.  But if the current agenda is about providing access to digital cultural capital (i.e., knowledge and other intellectual works), the next logical step is to provide freer, more egalitarian access to social capital as well.  Facebook, MySpace and other services do this to some extent, but they are structured as vehicles for advertising and the furtherance of consumer culture, and in fact are more focused on commoditizing the social capital users bring into the system than on helping users generate new social capital.  (Thus, many people have noted that “social networking sites” is a misnomer for most of those services, since they are really about reinforcing existing social networks, not creating new connections.)

The Wikimedia community, in particular, has taken a dim view of anything that smacks of mere social networking (or worse, MMORPGs), as if cultural capital is important but social capital is not.  But from a Marxist perspective, it’s easier to see how intertwined the two are and how both are necessary to maintain a healthy free culture ecosystem.

Wikimedia and the rest of the free culture community, then, ought to get serious about supporting OpenMicroBlogging (the identi.ca protocol) and other existing alternatives to proprietary social networking and culture sites, and even perhaps starting a competitor to MySpace and Facebook.  (See some of the proposals I’m supporting on Wikimedia Strategic Planning wiki in this vein.)

Laugh-Out-Loud Cats #1090


Laugh-Out-Loud Cats #1090, originally uploaded by Ape Lad. Creative Commons-Attribution-Noncommercial-NoDerivatives 2.0

Yesterday, the new Laugh-Out-Loud Cats Wikipedia article appeared in Did you know (with a freely licensed example that the artist made available specifically for Wikipedia).

Today, Ape Lad posted this gem. For those unfamiliar with the Laugh-Out-Loud Cats, he provides the following…
“Context: the big one loathes ducks!”

Wikipedia’s search engine dominance = informational homogeneity?

Nicholas Carr (of “Is Google Making Us Stupid?” fame) is a consistent source of thought-provoking but (in my view) off-base critiques of the information age in general and Wikipedia in particular. He has an interesting post on the Britannica Blog, “All Hail the Information Triumvirate”. This coincides with Britannica’s rollout of new features to invite readers to suggest improvements, and some of the usual impotent snipes from Robert McHenry and other Britannica editors. Wikipedia gets 97% of all encyclopedia traffic on the Internet, so they have little to do but whine about the culture that let this happen and/or try to learn from Wikipedia’s success.

A favorite tactic of Wikipedia critics is to bemoan Wikipedia’s search engine success. Carr demonstrates Wikipedia’s dominance of results from the most popular search engine (Google), showing that for ten diverse searches that he first ran in August 2006, then again in December 2007, and again this month, Wikipedia articles rose from an average placement of #4 to the #1 hit for all ten searches. Carr “wonder[s] if the transformation of the Net from a radically heterogeneous information source to a radically homogeneous one is a good thing” and has difficulty imagining “that Wikipedia articles are actually the very best source of information for all of the many thousands of topics on which they now appear as the top Google search result.” But this rings shallow without examples (say, for any of his ten searches) of what single web pages would be better starting points.

The idea that the Net has become “radically homogeneous” just because Wikipedia is often the first Google hit is absurd. Wikipedia itself is far from homogeneous, and indeed its great strength is the way it brings together the good parts of many of the other sources of information on the Internet (and beyond). Carr’s implication seems to be that without Wikipedia (the “path of least resistance” for information delivery) search results would be better and finding valuable web content would be easier.

Carr seems to conceive of Wikipedia as a filter placed over Google that lets through only a homogeneous mediocrity. Wikipedia is better thought of as a refined version of Google’s method of harnessing the heterogeneity of the Internet; where Google relies on a purely mechanical process, Wikipedia brings together sources with consideration of the individual topic at hand and human evaluation of the importance and reliability of each source.