“they didn’t belong to us at Pixar anymore”

We picked up some Toy Story toys at a garage sale this weekend, which have become the center of Brighton’s life for the time being.

John Lasseter, director of Toy Story, has a great story about how, five days after the movie came out and audiences started falling in love with it, he

realized that Woody, Buzz Lightyear, all the Toy Story characters… they didn’t belong to us at Pixar anymore,

but to the people who had made those characters a part of their own lives.

Of course, the lawyers at Pixar will tell you a very different story.

Plagiarism and authorship

From a New York Times article, “Plagiarism Lines Blur for Students in Digital Age”:

…these cases — typical ones, according to writing tutors and officials responsible for discipline at the three schools who described the plagiarism — suggest that many students simply do not grasp that using words they did not write is a serious misdeed.

It is a disconnect that is growing in the Internet age as concepts of intellectual property, copyright and originality are under assault in the unbridled exchange of online information, say educators who study plagiarism.

Digital technology makes copying and pasting easy, of course. But that is the least of it. The Internet may also be redefining how students — who came of age with music file-sharing, Wikipedia and Web-linking — understand the concept of authorship and the singularity of any text or image.

Remixing, building on the work of others, collaborating (often anonymously), challenging the very premise of intellectual property… these are all happening.  And yes, the web makes plagiarism easier than ever to conduct (and to discover).  But is student plagiarism really coupled with changing conceptions of authorship?

I haven’t seen much evidence of that.  In the NYT article, I see instead people using plagiarism to attack values and ideas they don’t like.  For example, anthropologist Susan D. Blum, author of My Word!: Plagiarism and College Culture:

She contends that undergraduates are less interested in cultivating a unique and authentic identity — as their 1960s counterparts were — than in trying on many different personas, which the Web enables with social networking.

“If you are not so worried about presenting yourself as absolutely unique, then it’s O.K. if you say other people’s words, it’s O.K. if you say things you don’t believe, it’s O.K. if you write papers you couldn’t care less about because they accomplish the task, which is turning something in and getting a grade,” Ms. Blum said, voicing student attitudes. “And it’s O.K. if you put words out there without getting any credit.”

So plagiarism is a way to cast changing concepts of authorship and originality (and the politics of free culture that go with that) as moral failings.

silly videos and obscure post-structuralist terms

Evgeny Morozov has a new review of Jaron Lanier’s You Are Not a Gadget, and he spends a fair bit of it talking about Wikipedia, the touchstone for how the Internet is changing culture.  (Wikipedia researcher Ed Chi offered to review it for the Signpost, but Knopf publicity has so far ignored my every attempt to request a review copy.)  As I understand it, the book is in part an extension of Lanier’s Wikipedia-centered 2006 essay “Digital Maoism: The Hazards of the New Online Collectivism”.  I haven’t read the book, but I trust Morozov’s assessment.  His central point is this:

Technology has penetrated our lives so deeply and so quickly that the only way to make sense of what is happening today requires not only drinking from the anecdotal fire hose that is Twitter, but also being able to contextualise these anecdotes in broader social, historical and cultural settings. But that’s not the kind of analysis that is spitting out of Silicon Valley blogs.

So who should be doing all of this thinking? Unfortunately, Lanier only tells us who should not be doing it: “Technology criticism should not be left to the Luddites”. Statements like this establish Lanier’s own bona fides – as a Silicon Valley maverick unafraid to confront the cyber-utopian establishment from the inside – but they fail to articulate any kind of vision for how to improve our way of discussing technology and its increasingly massive impact on society.

Morozov says that our understanding of the legal dimensions of the Internet has been elucidated by the likes of Zittrain, Lessig and Benkler.  But humanists and social scientists, he says, have let us down in their duty to explore the cultural dimensions of the rise of the networked society, either ignoring it or relying on “obscure post-structuralist terms” that occlude whatever insights they might or might not have.

The overall point, that the academy hasn’t done enough to make itself relevant to ongoing techno-cultural changes, is right on target.  But I think Morozov’s glib dismissal of work in media studies, sociology, anthropology, etc., is unfair to both the main ideas of post-structuralism and the writing skills of the better scholars who work on technology and culture (Henry Jenkins and Jason Mittell come to mind, but I’m sure there are plenty of others).  Lanier’s epithet of “digital Maoism” is crude red-baiting; I’m not sure whether Morozov’s jargon jibe is red-baiting as well (post-structuralism being the province of the so-called academic left), whether he genuinely doesn’t think much of how humanists have analyzed the Internet, or whether he is just being contrary.

Post-structuralism is complicated (and I don’t pretend to be an expert), but what’s relevant in this context, I think, is (as the Wikipedia article obtusely puts it) the idea of “the signifier and signified as inseparable but not united; meaning itself inheres to the play of difference.”  Put another way, culture (that is, a work of culture) is valuable in whatever ways culture (that is, a culture, a group of people) values it; what matters is not the work itself (and its inherent or intended meaning) but the relationship between a work and its audience.  Related to this is a value judgment about what kinds of culture are better or more worthy of attention: “writerly” works that leave more opportunity for an audience to create its own meanings vs. “readerly” works that are less flexible and open to reinterpretation.  The relevance of these ideas for the Internet’s effects on culture should be obvious: audiences now have ways of collaborating in the creation of new meanings and the reinterpretation of cultural works, and can often interact not only with an author’s work, but with the authors themselves (thereby influencing later works).

So when Lanier sneers at ‘silly videos’ and Morozov complains that Lessig doesn’t address “whether the shift to the remix culture as a primary form of cultural production would be good for society”, I can’t help but see it as the crux of a straw man argument.  You would have us give up our current system, which creates such wonderful culture (left helpfully unspecified, since there’s no accounting for taste), in exchange for remixed YouTube tripe?  But humanists are starting to place more value on the capital-intensive products of the culture industry precisely because of the way that audiences can remix them and reuse them and create meanings from them.

Wikipedia in theory (Marxist edition)

The zeroth law of Wikipedia states: “The problem with Wikipedia is that it only works in practice. In theory, it can never work.”

That’s largely true of the kinds of theory that are most closely related to the hacker-centric early Wikipedia community: analytical philosophy, epistemology, and other offshoots of positive philosophy–the kinds of theory closest to the cultures of math and science.  (See my earlier post on “Wikipedia in theory”.)  But there’s another body of theory in which Wikipedia’s success can make a lot of sense: Marxism and its successors (“critical theory”, or simply “Theory”).

A fantastic post on Greg Allen’s Daddy Types blog, “The Triumph of the Crayolatariat”, reminded me (indirectly) of how powerful Marxist concepts can be for understanding Wikipedia and the free software and free culture movements more broadly.

It’s a core principle of post-industrial political economy that knowledge is not just a product created by economic and cultural activity, but a key part of the means of production (i.e., cultural capital).  Software, patentable ideas, and copyrighted content of all sorts are the basis for a wide variety of production.  Software is used to create more software as well as visual art, fiction, music, scientific knowledge, journalism, etc.  (See “Copyleft vs. Copyright: A Marxist Critique”, Johan Söderberg, First Monday.) And all those things are inputs into the production of new cultural products.  The idea of “remix culture” that Larry Lessig has been promoting recently emphasizes that in the digital realm, there’s no clear distinction between cultural products and means of cultural production; art builds on art.  (Lessig, however, has resisted associations between the Creative Commons cultural agenda and the Marxist tradition, an attitude that has brought attacks from the left, e.g., the Libre Society.)

Modern intellectual property regimes are designed to turn non-material means of production into things that can be owned.  And the free software and free culture movements are about collective ownership of those means of production.

Also implicit in the free culture movement’s celebration of participatory culture and user-generated content (see my post on “LOLcats as Soulcraft”) is the set of arguments advanced by later theorists about the commodification of culture.  A society that consumes the products of a culture industry is very different from one in which producers and consumers of cultural content are the same people–even if the cultural content created was the same (which of course would not be the case).

What can a Marxist viewpoint tell us about where Wikimedia and free culture can or should go from here? One possibility is online “social networking”.  The Wikimedia community, and until recently even the free software movement, hasn’t paid much attention to social networking or offered serious competition to proprietary sites like Facebook, MySpace, Twitter, etc.  But if the current agenda is about providing access to digital cultural capital (i.e., knowledge and other intellectual works), the next logical step is to provide freer, more egalitarian access to social capital as well.  Facebook, MySpace and other services do this to some extent, but they are structured as vehicles for advertising and the furtherance of consumer culture, and in fact are more focused on commoditizing the social capital users bring into the system than on helping users generate new social capital.  (Thus, many people have noted that “social networking sites” is a misnomer for most of those services, since they are really about reinforcing existing social networks, not creating new connections.)

The Wikimedia community, in particular, has taken a dim view of anything that smacks of mere social networking (or worse, MMORPGs), as if cultural capital is important but social capital is not.  But from a Marxist perspective, it’s easier to see how intertwined the two are and how both are necessary to maintain a healthy free culture ecosystem.

Wikimedia and the rest of the free culture community, then, ought to get serious about supporting OpenMicroBlogging (the identi.ca protocol) and other existing alternatives to proprietary social networking and culture sites, and perhaps even starting a competitor to MySpace and Facebook.  (See some of the proposals I’m supporting in this vein on the Wikimedia Strategic Planning wiki.)

If all content is just data, what does that mean for quality television?

“Why AT&T Killed Google Voice” by Andy Kessler in the Wall Street Journal is an insightful piece that’s been making the rounds lately. It’s worth reading. I’ll wait until you’re done.

The basic principle is that old media delivery companies–phone companies and cable TV–are trying as hard as they can to hold back the universal fungibility of data pipes. TV and voice streams are just data, but cable and phone companies can charge a whole lot more for those services than they can for pushing the equivalent generic bits over the network.

I agree with most of the article, but I’m worried about the implications for TV. Today we are seeing a lot of really great television being made, subsidized by the station model that aggregates a wide array of content for a single station and then further aggregates a set of stations into a standard subscription package. Under this model, HBO can make a high-caliber show like The Wire–reported to be a money loser in terms of viewership and direct sales–and still be happy to make similar shows, because they build the network’s reputation. Cheaper shows make more immediate financial sense, but shows like The Wire are loss leaders for stations (or packages of stations).

The current digital crisis of the news business–disaggregation of unprofitable journalism and profitable miscellanea–is going to hit TV sooner or later. Disaggregation of TV content might make it harder to make great complex serial television (although we’d be paying less for it).

On the other hand, it might make it easier to mobilize audiences to finance really great projects. To Fox, Firefly‘s set of rabidly dedicated fans was no more valuable than the same number of wishy-washy viewers of some lesser show (less valuable, in fact, if they represented a demographic that brought lower advertising prices). There was no way to translate the intensity of the fans’ devotion into enough revenue to justify continuing the show. In a world of disaggregated TV, things might have turned out differently, with higher prices compensating for smaller audiences.

Then again, the movie industry relies by choice solely on audience size, with tickets to each movie the same price (varying by theater, but not by movie), and the blockbusters are rarely very good.

The Two Cultures, 50 years later

7 May was the 50th anniversary of C. P. Snow‘s famous lecture The Two Cultures. Snow, a novelist who had studied science and held technology-related government positions, decried the cultural rift between scientists and literary intellectuals. Snow’s argument, and his sociopolitical agenda, were complex (read the published version if you want the sense of it; educational reform was the biggie), but, especially post-“Science Wars”, the idea of two cultures resonates beyond its original context. The current version of the Wikipedia article says:

The term two cultures has entered the general lexicon as a shorthand for differences between two attitudes. These are

  • The increasingly constructivist world view suffusing the humanities, in which the scientific method is seen as embedded within language and culture; and
  • The scientific viewpoint, in which the observer can still objectively make unbiased and non-culturally embedded observations about nature.

That’s a distinctly 1990s and 2000s perspective.

Snow’s original idea bears only scant resemblance to the scientism vs. constructivism meaning. As he explained, literary intellectuals (not entirely the same thing as humanities scholars) didn’t understand basic science or the technology-based socioeconomic foundations of modern life, and they didn’t care to. Novelists, poets and playwrights, he complained, didn’t know the second law of thermodynamics or what a machine-tool was, and the few that did certainly couldn’t expect their readers to know.

Humanistic studies of science (constructivist worldview and all) would have undermined Snow’s argument, but humanists were only just beginning to turn to science as a subject for analysis. (Kuhn’s Structure of Scientific Revolutions was not published until 1962. Structure did mark the acceleration of sociological and humanistic studies of science, but was actually taken up more enthusiastically by scientists than humanists. Constructivism only became widespread in the humanities by the 1980s, I’d guess, and the main thrust of constructivism, when described without jargon, is actually broadly consistent with the way most scientists today understand the nature of science. It’s not nearly so radical as the popular caricature presented in Higher Superstition and similar polemics.) Rather than humanists understanding the scientific method or scientists viewing their work through a sociological or anthropological lens, Snow’s main complaint was that scientific progress had left the slow-changing world of literature and its Luddite inhabitants behind (and hence, scientists found little use for modern literature).

Snow wrote that “naturally [scientists] had the future in their bones.” That was the core of the scientific culture, and the great failing of literary culture.

Looking back from 2009, I think history–and the point in it when Snow was making his argument–seems very different than it did to Snow. Who, besides scientists, had the future in their bones in 1959? In the 1950s academic world, literature was the pinnacle of ivory tower high culture. Not film, not television, certainly not paraliterary genres like science fiction or comic books. History of science was a minor field that had closer connections to science than to mainstream history.

Today, in addition to scientists, a whole range of others are seen as having “the future in their bones”: purveyors of speculative fiction in every medium; web entrepreneurs and social media gurus; geeks of all sorts; venture capitalists; kids who increasingly demand a role in constructing their (our) own cultural world. The modern humanities are turning their attention to these groups and their historical predecessors. As Shakespeare (we are now quick to note) was the popular entertainment of his day, we now look beyond traditional “literary fiction” to find the important cultural works of more recent decades. And in the popular culture of the 1950s through to today, we can see, perhaps, that science was already seeping out much further from the social world of scientists themselves than Snow and other promoters of the two cultures thesis could recognize–blinded, as they were, by the strict focus on what passed for high literature.

Science permeated much of anglophone culture, but rather than spreading from high culture outward (as Snow hoped it might), it first took hold in culturally marginal niches and only gradually filtered into the insulated spheres of high culture. Science fiction historians point to the 1950s as the height of the so-called “Golden Age of [hard] Science Fiction”, when SF authors could count on their audience to understand basic science. Modern geek culture–and its significance across modern entertainment–draws in part, we now recognize, from the hacker culture of 1960s computer research. Feminists and the development of the pill; environmentalists; the list of possible examples of science-related futuremaking goes on and on, but Snow had us looking in the wrong places.

Certainly, cultural gaps remain between the sciences and the humanities (although, in terms of scholarly literature, there is a remarkable degree of overlap and interchange, forming one big network with a number of fuzzy divisions). But C. P. Snow’s The Two Cultures seems less and less relevant for modern society; looking back, it even seems less relevant to its original context.

Neal Stephenson on 300

(The) Neal Stephenson has a great op-ed about 300 in the New York Times. He has a lot to say about the changing currents of culture and the recent history of science fiction, but his defense of 300 against its poor critical reception (although David Edelstein, whose reviews are generally the only ones I even think of taking seriously, hasn’t yet produced one) is superb, and could well be about any number of the good things about modern popular culture: “These [geeks] don’t need irony or campiness self-consciously pointed out to them, any more than they need a laugh track to enjoy ‘The Simpsons.’”

On that note, I’m off to see 300. Hopefully, it won’t turn me into a xenophobic Persian-hater.

Cultural change in the modern world

My manifesto post got picked up by OU’s patahistorian David Davisson for the latest History Carnival. From there, I happened upon a Crooked Timber post by John Quiggin on “the traditionality of modernity,” a clever way of saying that, contrary to common historical intuition, cultural change is slowing down… and fast.

In a nutshell, technology-induced mass/global culture tends to make major cultural changes less, not more common. Elements of this include:

  • The standardization of written language following the printing press, a trend that is rapidly becoming panlingual (“it’s expected that during the 21st century the number of languages in the world will go from 6,000 to 300”).
  • The permanent fixation of/on the foundational pop culture icons like Marilyn Monroe or The Beatles (a dubious contention, but maybe “Marilyn will, inevitably, fade, but never be replaced on her pedestal”).
  • Globalization, reification and simplification of many previously local traditions: styles of food, art forms, forms of national government (or the beginning of the end thereof, with the EU and global economic institutions).

I’m still not sure how much of this I buy as a general statement, but some of it at least is true, and some of it is lamentable. Whatever truth there is to this technology-leads-to-cultural-hegemony thesis, it’s obviously somewhat more complex, and I think somewhat more positive, than the general tone of discussion at Crooked Timber. I won’t particularly mourn the death of 5,700 languages, despite whatever profoundly different ways of thinking such languages might or might not enable. There are more than enough socially constructed boundaries of thought to hamper communication and exchange (e.g., academic disciplines, nationality), and subcultures proliferate mightily in the modern world, providing ample breeding ground for new ideas and traditions while retaining the ability to swiftly reconnect to mainstream culture (or other subcultures) when necessary.

My course with Jean-Christophe Agnew (The American Century, 1941-1961) has been great, and it provides a jumping-off point for assessing this cultural hegemony idea. The premise of the course, which I’m increasingly convinced of, is that those two decades (give or take a few years) formed the basis of American culture since that time; nearly all the significant shifts of the later 20th century had their origins then, and cultural events from the period are still frequently relevant today. This period and the turn-of-the-century rise of the ever-nebulous “modernity” (which I studied with Ole Molvig last semester, incidentally) were singled out in the Crooked Timber discussion as periods when cultural change seemed especially rapid compared to today, and I would generally agree.

But I also think we’re seeing the beginning of the reversal or supersession of the homogenizing trends in American culture that have been in play since the 60s. Widespread television broadcasting and the other byproducts of defense research from WWI and WWII are finally being overtaken in cultural significance by the Cold War research legacy of computers. Along with this comes “the long tail,” the massive diversification of cultural products that is just beginning. The hit for music and the blockbuster for movies (the things that make radio and theaters so lame today) are both dying economic modes; they’re being replaced by niche-centric media such as digital music stores, Netflix (which apparently has a superb recommendation system that facilitates discovery of movies both new and old that escape mainstream attention), and other “new economy”-style retailers that make niche content profitable again.

Mainstream media is not likely to die completely, and its current troubles only make it even more homogenous and derivative… witness the current trend of mergers in news agencies, the fact that half the shows on network TV are Law and Order spinoffs (some day I’ll write a post about the pernicious political effect those shows must have), and the fact that the only truly good blockbuster from last year was not from Hollywood, and even it followed the current formula of sticking to established franchises and/or well-worn classic plots. (Neo-noir comic book male-fantasy shoot-em-up with computer graphics… seemingly the least original movie possible.) But again, I see some silver lining in retaining and even enhancing a cultural baseline as a backdrop for the vibrant long tail of culture. The key is to improve that cultural baseline (the point of my recent manifesto), but I think there is more hope for that project now than at any time since the rise of Cold War culture. The fact that these issues regarding the interplay of technology and culture are becoming visible means we needn’t feel trapped by any technological determinism; now is the time to determine the shape of mass culture for the next century.

This is, of course, a very modernocentric (is there a better word for this?) view. What about all the full-blown culture(s) being obliterated by the shift to modernity? I don’t know how to answer that… I’ve never been too enthralled by anthropology and the idea of culture for culture’s sake. The modern/post-modern long tail world will make it easier to preserve parts of traditional culture, but transitions to modernity will still entail a lot of suffering; the results of the current world picture look somewhat more promising than the fruits of 50s and 60s modernization theory, even though not much has fundamentally changed (besides the end of the Cold War).

Alas, that’s probably enough of an incoherent rant for one night.

Narrative history vs. Insightful history, Time and Space

As much as I like John Demos’s Narrative History class (and as much as I’m learning about writing and style), I’ve come to realize that I have neither the desire nor the knack to be a narrative historian. Frankly, the more narrative, engaging, engrossing, lyrical the prose has been in the class (particularly in the short essays my classmates and I have written), the less the content could possibly be historically interesting (according to my definition of interesting, of course). This week we wrote papers on 9/11, and the other papers were all very nicely written; some of them were really very much better than basically anything you would find in an academic work. Better than the narrative history books we’ve read so far, I thought. But you also would not find papers like those in an academic work.

[Thanks go to the Subtle Doctor for his report on my classmates.]

For next week, our writing topic is totally open; we’re expected to apply these narrative methods we’ve been practicing to something in our own sphere of interest/knowledge. I haven’t actually done any research (e.g., the institutional history of Yale’s various biology departments or G. E. Hutchinson’s letters of recommendation) that involves a compelling story, so I’m going to have to basically retell a history of science story I’m familiar with [note: prepositions are for ending sentences with, no matter what Prof. Demos says]. But looking over my bookshelves, full of science stories I like so much, I find it hard to think of one I could retell with conviction, without explicit analysis. I’m afraid it will turn into one of those scientist-as-hero stories, the fight against which is exactly what makes history of science so interesting.

Meanwhile, I’m currently reading Stephen Kern’s The Culture of Time and Space for Ole Molvig’s class. We read a small part of it last semester for the Intro to History of Science class, and I found very little value in it; it tries to make massive connections across turn-of-the-century culture (1880-1918, precisely), incorporating art, literature, philosophy, science, technology, and whatever else Kern could find into a very loose framework analyzing how people experienced the concepts of time and space. One criticism we had was that it was so broad, yet every time it touched on something we knew well it seemed particularly weak, making the rest, with which we were less familiar, suspect as well. But starting from the beginning (and reading his circumspect introduction, where he acknowledges the limits of his approach), I like it much better. Mainly because it’s well-written and it flows. Even if the broad connections are very weak and contingent on the sources he chose to include and not include (and they are), it does a great job of giving an overview of how a relatively small canon of cultural figures fit into the emerging culture of modernity, and it approaches them from an interesting (particularly for a historian of science) thematic perspective. It has neither the virtues of narrative prose nor the strengths of thesis-driven argument, but it’s a compelling presentation nonetheless.