Plagiarism and authorship

From a New York Times article, “Plagiarism Lines Blur for Students in Digital Age”:

…these cases — typical ones, according to writing tutors and officials responsible for discipline at the three schools who described the plagiarism — suggest that many students simply do not grasp that using words they did not write is a serious misdeed.

It is a disconnect that is growing in the Internet age as concepts of intellectual property, copyright and originality are under assault in the unbridled exchange of online information, say educators who study plagiarism.

Digital technology makes copying and pasting easy, of course. But that is the least of it. The Internet may also be redefining how students — who came of age with music file-sharing, Wikipedia and Web-linking — understand the concept of authorship and the singularity of any text or image.

Remixing, building on the work of others, collaborating (often anonymously), challenging the very premise of intellectual property… these are all happening.  And yes, the web makes plagiarism easier than ever to conduct (and to discover).  But is student plagiarism really coupled with changing conceptions of authorship?

I haven’t seen much evidence of that.  In the NYT article, I see instead people using plagiarism to attack values and ideas they don’t like.  For example, anthropologist Susan D. Blum, author of My Word!: Plagiarism and College Culture:

She contends that undergraduates are less interested in cultivating a unique and authentic identity — as their 1960s counterparts were — than in trying on many different personas, which the Web enables with social networking.

“If you are not so worried about presenting yourself as absolutely unique, then it’s O.K. if you say other people’s words, it’s O.K. if you say things you don’t believe, it’s O.K. if you write papers you couldn’t care less about because they accomplish the task, which is turning something in and getting a grade,” Ms. Blum said, voicing student attitudes. “And it’s O.K. if you put words out there without getting any credit.”

So plagiarism is a way to cast changing concepts of authorship and originality (and the politics of free culture that go with that) as moral failings.

Stanley Fish and saving the world one book at a time

Stanley Fish has a challenging column in Sunday’s New York Times: “Neoliberalism and Higher Education”. As the contents of Cliopatria (my new blogging home away from home, now that Revise & Dissent is being shuttered), and indeed much of the academic blogosphere, attest, the trend of market approaches to the running of universities is on a lot of minds.

Fish’s own philosophy of the academy is largely orthogonal to neoliberalism: he exhorts academics to “stick to your academic knitting”, to “do your job and don’t try to do someone else’s”, and to leave off “trying to fashion a democratic citizenry or save the world”. Critics of neoliberalism, naturally, see such a perspective as backing up the power of university administrators (i.e., furthering neoliberalism in the academy). But Fish has also argued that “To the question ‘of what use are the humanities?’, the only honest answer is none whatsoever”, that the humanities (including his own field of literary theory) are intrinsically worthwhile but will not contribute to the saving of the world or other political ends. That is not a perspective that meshes well with the instrumental approach of neoliberalism.

As I explained in my first post to the now-defunct Revise & Dissent, my view is something along the lines of: if you’re not trying to save the world, what’s the point? Nevertheless, I mostly agree with Fish when he says we should not (in the name of academic freedom) erase the distinction between political action and scholarship (much less teaching). How, then, ought academics try to save the world? The most viable approach, I think, is through careful choice of what topics to apply the methods of one’s discipline to.

Take the work of historian of science Steven Shapin. In The Scientific Life: A Moral History of a Late Modern Vocation (2008), Shapin explores the complex ideas of what it was (and is) to be a scientist in the modern world. Despite media images where the academic scientist predominates, most scientists in the U.S. have been working in industry since the rise of the military-industrial complex in the 1940s and 1950s (and a large proportion were doing so even at the beginning of the 20th century). But the working life of the industry scientist is hardly the caricature of scientific management (squashing out the creativity and freedom that is a natural part of science) that has been circulating at least since the work of Robert K. Merton.

Although it’s not explicit in the book, Shapin’s work is a response to the trend of running universities like businesses. Successful businesses that revolve around original inquiry and research, Shapin shows, are a lot more like universities (pre-scientific management) than is generally appreciated. The implication is that, if universities are to be patterned after businesses, the appropriate examples within the world of business (as opposed to distorted ideas of business research that administrators might have) are actually not so foreign to the cherished culture of universities that opponents of neoliberalism in higher education seek to defend.

In his preface, in defense of his tendency in much of his historical work to address “the way we live now”, Shapin says this:

“I take for granted three things that many historians seem to find, to some degree, incompatible: (1) that historians should commit themselves to writing about the past, as it really was, and that the institutional intention of history writing must embrace such a commitment; (2) that we inevitably write about the past as an expression of present concerns, and that we have no choice in this matter; and (3) that we can write about the past to find out about how it came to be that we live as we now do, and, indeed, for giving better descriptions of the way we live now.”

In thing (3), I would replace can with should. Scholars have a moral responsibility to make their work responsive to the needs (as the scholars themselves see them) of the society that supports them.

Libraries and copyfraud

For the last week, I’ve been exchanging emails with curators at the Huntington Library about their use policies for digital images. For the Darwin Day 2009 Main Page effort on Wikipedia, I’ve been putting together a list of portraits of Darwin. Although a number of websites have significant collections of Darwin images, there isn’t any single comprehensive collection. One interesting shot I came across is an 1881 photograph, possibly the last one before Darwin’s death, that was allegedly “rediscovered” in the mid-1990s when a copy was donated to the Huntington. Press releases and exhibition descriptions invite people to contact the Huntington to request images, so I requested the Darwin photo. The response I got was typical of how libraries and archives deal with digital copies of rare public domain material.

The Huntington quoted distribution fees for the digital files (different sizes, different prices), and also asked for specific descriptions of how the image would be used, so that the library could give explicit permission for each use. Had I wanted to use it for more than just publicity (e.g., in a publication) more fees would apply. Apparently the curators were not used to the kind of response they got back from me: I politely but forcefully called them out for abusing the public domain and called their policy of attempting to exert copyright control over a public domain image “unconscionable”.

In the exchange that followed, I tried to explain why the library has neither the moral nor legal right to pretend authority over the image (although, I pointed out, charging fees for distribution is fine, even if their fees are pretty steep). A Curatorial Assistant, and then a Curator, tried to explain to me that the Huntington actually has generous lending policies (you don’t “lend” a PD digital image, I replied), that while the original is PD, using the digital file is “fair use” that the library has the right to enforce (fair use, by definition, only applies to copyrighted works, I replied), that having the physical copy entails the right to grant, or not, permission to use reproductions (see Bridgeman v. Corel, I replied), that other libraries and museums do the same thing (that doesn’t make it right, I replied), that big corporations might use it without giving the library a cut if they didn’t claim rights (nevertheless, claiming such rights is called copyfraud and it’s a crime, I replied), and finally that I should contact the Yale libraries and museums and see if they do things any differently (a return to the earlier “everyone else does it” argument with a pinch of ad hominem for good measure, to which I see no point in replying).

Unfortunately, the Curator is right that copyfraud is standard operating procedure for libraries and archives. Still, I think it’s productive to point out the problem each time one encounters it; sooner or later, these institutions will start to get with the program.

As an aside, the copyright status of this image is rather convoluted. The original is from 1881. The photographer, Herbert Rose Barraud, died in 1896. The version shown here (originally; now lost) is a postcard from 1908 or soon after, making it unquestionably public domain. It comes from the delightful site Darwiniana, a catalog of the reproductions and reinterpretations of Darwin’s image that proliferated in the wake of his spreading fame. Apparently, when the image was “rediscovered” in a donation to the Huntington, they thought it had never been published and was one of but two copies; a short article about the photograph appeared in Scientific American in 1995. Had it actually never been published until then, it would arguably be under copyright until 2047 because of the awful Copyright Act of 1976. I say “arguably” because the vague definition of “publish”, the rules for copyright transfer (“transfer of ownership of any material object that embodies a protected work does not of itself convey any rights in the copyright”), and the fact that another copy exists would all seem to indicate that, at the very least, the Huntington has no place claiming copyright. Paradoxically, publishing it for the first time in 1995 would have extended the copyright to 2047 but would have made the Huntington and/or Scientific American into violators of the copyright of whoever actually owned it (which would likely be indeterminable). But if it had remained unpublished, it would be public domain. I’m still unclear about whether it would have been public domain before 2002, when the perpetual copyright window of the 1976 law closed.
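The tangle of dates above can be sketched as a toy function. The cutoffs here (pre-1923 publication, the 1978–2002 first-publication window, the 2047 and life-plus-70 terms) follow my rough characterization of the 1976 Act in this post; this is a simplification for illustration, certainly not legal advice:

```python
def us_copyright_status(created, author_died, first_published):
    """Rough US public-domain status for an old work, as of this post's
    writing (2009). first_published is a year, or None if never published."""
    if first_published is not None and first_published < 1923:
        # Published before 1923: unquestionably public domain.
        return "public domain"
    if first_published is None:
        # Never published: life + 70, but protected at least through 2002
        # under the 1976 Act's transition rules.
        expiry = max(author_died + 70, 2002)
        return "public domain" if expiry < 2009 else f"protected until {expiry}"
    if 1978 <= first_published <= 2002:
        # First published in the 1978-2002 window: protected until 2047.
        return "protected until 2047"
    return "status unclear"

# The Barraud photo: taken 1881, photographer died 1896.
print(us_copyright_status(1881, 1896, 1908))  # postcard published c. 1908
print(us_copyright_status(1881, 1896, 1995))  # if first published in 1995
print(us_copyright_status(1881, 1896, None))  # if never published at all
```

Running the three scenarios reproduces the paradox: the 1908 postcard is public domain, a hypothetical 1995 first publication would lock it up until 2047, and never publishing it at all would also leave it public domain (life + 70 ran out in 1966, and the transition window closed at the end of 2002).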

UPDATE – My thanks to the others who’ve linked to and discussed this post.

The pope, Feyerabend and Galileo

Anytime you see a reference to Paul Feyerabend in the news, you can be almost certain that he’s being misinterpreted or taken out of context.

As newspapers have been reporting, the pope canceled a planned inaugural speech for the beginning of term at La Sapienza University, in response to the vehement objections of a group of scientists there. As the news reports would have it, the issue was that the pope (then Cardinal Ratzinger) had defended the heresy trial and conviction of Galileo, quoting philosopher of science Paul Feyerabend that the judgment against Galileo and his heliocentric theory was ‘rational and just’.

In this case (according to seemingly knowledgeable philosophers on the HOPOS mailing list and in the comments of this Leiter Reports post), Ratzinger invoked Feyerabend as one example of anti-rationalist thought, not necessarily as his own view. And the quote, while perhaps literally accurate, is a translation from Ratzinger’s Italian speech, probably based on the German version of Feyerabend (either Against Method or Farewell to Reason). Feyerabend argued that the church’s position was rational in that the weight of scientific evidence really did favor heliocentrism at the time, and (to quote Barry Stocker’s comment from the Leiter post) “had the right social intention, viz, to protect people from the machinations of specialists. It wanted to protect people from being corrupted by a narrow ideology that might work in restricted domains but was incapable of sustaining a harmonious life.”

That is, neither Feyerabend nor Ratzinger were suggesting that the judgment was just in the sense of Galileo having been wrong about heliocentrism (or his interpretation of scripture to square with heliocentrism).

But to be fair to the scientists protesting the pope’s speech, their main issue is not Galileo but the Vatican’s positions about the relationship between science and the church. As one professor explained on the CBC’s As It Happens (part 1, about 18 minutes in), it’s the tension between a religious authority and a secular university that’s the real issue; the pope has no place in the secular scholarly activities of the university, he argues.

But Galileo vs. the Church is always a good hook for a story. Don’t expect the misuse of the Galileo Affair, or of Feyerabend, to go away any time soon.

The End of the History of Science?

I went to a handful of interesting talks at HSS this year.

The first was the tail-end of a session on astrology (Kepler’s, in particular), which underscored the importance of the social and political forces that were driving–and have been written out of–the Scientific Revolution. The need for better, more accurate astrological advice for kings and emperors was the reason people like Kepler and Tycho Brahe had the support to do their work, and to a large extent astrology was why they were doing astronomy. Disagreements over the scope and validity of astrology were also part of the under-explored dynamics of intra-Protestant theological politics that buffeted Kepler and Tycho from patron to patron. The situation with early modern alchemy, driven more by practical than mystical concerns, has similarly been neglected in the big-picture accounts. Neither astrology nor alchemy figures much in Peter Dear’s 2001 Revolutionizing the Sciences or Steven Shapin’s 1996 The Scientific Revolution, supposedly the two main post-“social turn” Sci Rev reevaluations.

The next good talk was Stephen Weldon’s on Francis Schaeffer and his influence on modern American Protestant attitudes toward science. Anyone trying to understand the Intelligent Design movement and the reasons it has been considerably more successful among non-Fundamentalists than the Creation Science of the 1970s and ’80s was, needs to know about Schaeffer.

But the most interesting session was The End of Science. It was nominally organized around John Horgan’s 1996 book The End of Science. Unfortunately, Horgan phoned it in on this one, delivering a talk that basically consisted of his 2006 Discover magazine article (which I blogged about a year ago when I first discovered Horgan’s work). But between Horgan and Andre Wakefield’s talk on “The End of the History of Science?”, discussing the disciplinary fate of history of science as something set apart from garden-variety history, there was plenty to rile up the crowd (as much as historians can get riled up). Wakefield was celebrating the facts that (unlike in the bad old days of Sartonian handmaiden-to-science history) one no longer needs to understand the science one does the history of, and that history of science is being absorbed into the disciplinary structure of straight history.

One of the striking things about HSS is how little one historian has in common with the next. There were up to 12 sessions going on at once, so you could stay within your temporal, geographical and disciplinary areas of interest (and probably within your historiographical approach, as well). One of the things meetings like this make apparent is the degree to which collegiality and networking (along with university press editors) drive careers in history of science (and in history more generally), rather than peer evaluations of intellectual output. It’s all about the parties and receptions after the day’s talks are over.

How does Wikipedia affect experts?

Britannica Blog’s Web 2.0 forum is wrapping up this week. On the Wikipedia front, Michael Gorman has delivered his promised Wikipedia post, and danah boyd has an exceptional reply on why Wikipedia, and access to knowledge in general, is important. While Gorman’s posts are consistently vapid and unprovocative (except in the sense that cable news talking points are provocative), some of the other new media critics–particularly Seth Finkelstein–highlight an important issue that I think is at the heart of the debate. Namely, how do Wikipedia and other aspects of the read/write web knowledge ecosystem affect experts/professionals and their traditional systems of knowledge production?

The shared assumption amongst critics is that the effects are largely negative. Finkelstein put it dramatically in a comment directed at boyd. He characterizes the ‘experts should stop complaining about Wikipedia’s problems and just fix them’ refrain as “arguing that ‘capitalists’ should give – not even sell, but give – Wikipedia the rope to hang them with!” He adds, “If an expert writes a good Wikipedia article, that gets claimed as the wisdom of crowds and presented as proof that amateurs can do just as well as experts.” So scholars are putting themselves out of a job by contributing to Wikipedia and the like.

Setting aside the ‘Wikipedia=Wisdom of Crowds’ strawman that so many Wikipedia critics knock down as their first and final argument, Finkelstein (and some of the others) hit on an important argument: amateur-produced knowledge products (often of inferior quality) are free, and this endangers the (political, financial, intellectual, and/or cultural) economies of expertise. But is that true? Is Wikipedia reducing the demand for scholarly monographs? Is writing a good Wikipedia article on the history of biology going to cut into the sales of all the sources I cite? Is it going to fill the demand for history of biology scholarship and make it tougher to find a publisher for my own work? In economic terms, the competition argument against Wikipedia assumes that traditionally-produced expert knowledge and community-produced knowledge are substitute goods with respect to each other (and are not substitute goods with respect to even lower quality knowledge products like cable news, tabloids, and CNN.com), that either demand for knowledge is relatively static or increased consumption isn’t necessarily desirable, and that knowledge products do not have significant prestige value linked to their traditional pricey modes of production (i.e., they aren’t Veblen goods).

Which of these assumptions holds true differs according to what genre of knowledge we’re talking about. Wikipedia is obviously a substitute for traditional encyclopedias (even if inferior); the Wikipedia threat has been obvious to Britannica and her ilk since 2003. And while consumption of encyclopedia-style knowledge has increased tremendously, critics can argue that the quality is so inferior that it isn’t worth the displacement of traditional encyclopedia consumption. Britannica is also realizing that the mystique of their brand isn’t what it used to be; raising prices is certainly not going to increase demand. So for the encyclopedia genre, Wikipedia is harmful to the traditional expert production system, and possibly (depending on the quality level) harmful to society as a whole.

For original expert research, the stuff of scholarly books and journal articles, the situation is very different. In some cases, Wikipedia articles might act as substitute goods for scholarly books and journals. However, an encyclopedia article is a fundamentally different knowledge product from an original journal article. The typical journal article is far deeper, and far less accessible, than the approximately corresponding Wikipedia article. My feeling is that rather than act as substitutes, Wikipedia articles and expert research usually contribute to network effects: a good Wikipedia article draws in new knowledge consumers, some of whom then delve into the expert research. In the world of the ivory tower, a humanist scholar usually has to worry much more about competition from the countless other topics out there than about an oversupply of work on one’s own topic. The more people hear about your topic, the more demand there is for your expertise.

News is the other main genre to consider. The newspaper industry has been in a downward spiral for years. Television news is a powerful competitor, and it’s plausible (though by no means obvious) that Wikipedia, citizen journalism, and the blogosphere are contributing to the slow death of the newspapers as well. (Sadly, it’s not plausible that Wikinews is contributing to the downfall of print journalism. At some point, the disintegration of professional journalism may reach a critical mass, and citizen journalism will step up to fill the holes left by the shrinking New York Times. Wikinews has the potential to become the most important media organization in the world, but at this point it still has virtually no impact beyond the Wikimedia community.)

But it seems that the shift to the web (with its drastically lower ad revenues) and the competition among newspapers that can now compete across the country (or even globally) is the main cause. Papers used to have more-or-less local monopolies for print news; they would buy national and world news from the wire agencies (for which they were the only local suppliers) and pour most of their revenue into local and regional reporting. But now, any paper can use the internet to hawk national and international news, and consumption of that kind of knowledge product is fairly static (at least compared to encyclopedia article consumption). So competition lowers the price of broad news (with modest increases in text-based news consumption, at best) and restricts the production (and increases the price) of local and investigative journalism.

The situation is much the same as the late-19th/early-20th century steel and railroad industries: there is just too much competition for a stable marketplace, so we’re seeing mergers and increasingly powerful media conglomerates (and the government is more willing to sign off on cross-media mergers). So I don’t think that web 2.0 knowledge products are responsible for the troubles of professional journalism, but if they don’t step up to fill the gaps, maybe nothing will.

Roger Kimball (cultural critic and co-editor of the conservative literary magazine The New Criterion) has a sharp take on the dangers of cyberspace (hint: they have nothing to do with threats to traditional expertise and everything to do with the real world we’re missing as we piddle around in virtual worlds). Kimball’s points are worth keeping in mind, and on the topic of journalism and web 2.0, one of the key ways to avoid some of the dangers of cyberspace is to create and participate in online communities that are focused on the real world (e.g., Wikinews and the parts of Wikipedia that are not about entertainment).

(P.S.: I don’t actually know anything about economics, so treat my analysis like you would a Wikipedia article)


Britannica Blog asks “Web 2.0: Threat or Menace?”

The Britannica Blog (“where ideas matter”) is holding a Web 2.0 Forum, built on Michael Gorman’s contention that the internet is in the process of rapidly destroying civilization. Gorman, former president of the American Library Association, engages in two posts’ worth of polemic that may pass for discourse in a few disreputable forms of traditional print, but online is unmistakably in the genre of trolling. In “Web 2.0: The Sleep of Reason” Part I and Part II, Gorman lays out an argument about the evils of the wisdom of crowds, citizen journalism, the cult of the amateur, digital Maoism, and the online retreat from authority and authenticity.

Clay Shirky disassembles Gorman’s essay nicely with “Old Revolutions Good, New Revolutions Bad: A Response to Gorman“. At the Britannica forum, Andrew Keen and Nicholas Carr (to overgeneralize slightly) back Gorman up, while Matthew Battles tries to bring some common sense to the high and mighty.

Gorman has promised a post next week dedicated specifically to Wikipedia (the epitome of the Web 2.0 devolution of traditional media and respect for authority, and the worst thing on the internet since pornography). The forum is scheduled to run until June 29th, and if it keeps up the current post frequency (5 posts in three days) it should be worth keeping up with.

Bonus Link: Ben Yates of the Wikipedia Blog had an insightful critique of the Britannica Blog a while back.

MIT dean of admissions faked credentials

Marilee Jones, the MIT dean of admissions who has set the tone for making college admissions less of a ridiculous and unhealthy process at elite schools, resigned today after it was revealed that she had faked her credentials. In fact, she has no college degrees (rather than the three she had claimed since beginning at the MIT admissions office in 1979).

In other news, I’m thinking of dropping out of grad school to start my own degree mill. I’ll start by awarding myself a Doctorate of Mad Science in Flesh Reanimation, and a Masters of Disinformation Science.

The academic job market, graduate education, the 2-4 Project, and GESO

As most graduate students in the humanities and social sciences know, the academic job market is crap. According to the recent Responsive Ph.D. report by the Woodrow Wilson National Fellowship Foundation (full PDF here), “as few as two out of every ten” graduates in “disciplines like history and English” will get tenure-track jobs. (The report is unfortunately vague about what other disciplines are like history and English, and it has no references for where the figure comes from, but it seems believable.)

A closely related problem is the ever-growing time-to-degree. In the fields with the worst job markets, competition is most intense and students feel they have to put that much more effort into dissertations to be competitive. Thus, it is not uncommon for humanists to spend 8 or even 10 years in graduate school.

The Responsive Ph.D. report lays out a set of four principles and four accompanying “themes” that make up the gist of its conclusions:

  1. “A Graduate School For Real” (theme: new paradigms) — Graduate schools and their deans should have more authority within research universities, and graduate programs should be the intellectual center of the university. Scholarship should remain at the center of graduate education (despite calls to de-emphasize it in previous reports).
  2. “A Cosmopolitan Doctorate” (theme: new practices) — Graduate training needs to be more relevant to the real world, with more effort put into pedagogy and into the application of academic knowledge.
  3. “Drawn From the Breadth of the Populace” (theme: new people) — Graduate schools need to train more people of color. Non-whites are more interested in applying their expertise in socially significant ways, so this goes hand in hand with principle 2.
  4. “An Assessed Excellence” (theme: new partnerships) — Graduate programs need to evaluate themselves critically, and graduate schools need to evaluate their individual programs. And these evaluations need to “have teeth” in terms of funding, and they need to connect to needs of the broader system that employs graduates as well.

The first principle is not surprising: ask graduate deans how to change the system, and they answer “give us more authority and a bigger budget.” The emphasis on scholarly depth is a half-hearted one; graduate school still has to train the elites of the next academic generation, but the uselessness of most of graduate training for anything but learning to do (overspecialized, esoteric, socially near-useless) research is getting harder to ignore.

The second principle is where it gets everything right. That’s what I’ve been screamin’ for a while now.

The third principle is nice in principle, but lack of diversity in graduate school is a problem caused almost entirely at lower levels (i.e., lack of educational opportunity at the primary and secondary levels, and to a lesser extent in undergraduate education). Class is the real underlying issue, and I don’t think addressing the problem in terms of race is an efficient way to move forward in the long term.

The fourth principle is a good one, I think. Graduate schools ought to be more free to shrink or eliminate weak programs or programs in fields that can’t absorb enough graduates.

In response to the Responsive Ph.D. report, Yale created the “2-4 Project”, an effort to seek suggestions and then implement changes in the structure of the second through fourth years of graduate training. I think most of the proposed changes would be positive; moving the first year of teaching to the second year (concurrent with coursework, a portion of which might be moved later) is a great idea, as is reduction in the time-to-candidacy. The other aspects are fairly minor, but in my view either good or neutral changes.

GESO, the attempted grad student union that has never quite managed a credible majority, has been strongly critical of the 2-4 Project (see this brochure), especially the rushing out of grad students and encouragement to scale back dissertations. Consistent with GESO’s view of grad students as semi-professional teacher-scholars (with the same academia-wide de-emphasis on the “teacher” part), they strongly resist moves to make graduate school anything but a six-year (or more) all-expenses-paid research sabbatical for the preparation of the paradigm-shifting work of scholarship that is the dissertation. They want more senior faculty, lighter faculty teaching loads, more grad student funding, and fewer teaching requirements.

A GESO organizer had an article in the YDN on October 18 about the 2-4 Project. A sixth-year in Germanic Languages and Literature, he makes an almost unbearably pretentious statement that sums up much of what I find wrong in the culture of the academy: “Writing a major intervention in my field takes time [seven to nine years]. That is what I was brought here to do, and it is what I intend to accomplish.” “Intervening” in a field, seemingly for the sake of intervening, is the high calling of the academy. And no matter how long it takes, it’s worth it (he is, after all, one of the chosen ones, “brought here” on a mission, and entitled to his turret in the ivory tower). In defense of the lengthening time-to-degree, he cites the 2-in-10 statistic above (ignoring, of course, that for Yale Ph.D.s, it’s probably closer to 8-in-10 who end up in tenure-track jobs). He writes off non-academic careers in the usual way: they’re fine for “students who want them”, not that there’s anything wrong with that. The problem is that reforming graduate education to incorporate and validate nonacademic career paths (another part of 2-4 he and GESO oppose) is the only way to give intellectual legitimacy to anything beyond that ivory tower model of sagacity.

Another 2-4 issue was grading reform; it was opposed by GESO and, as it turns out, a majority of grad students, and it was recently rejected by the faculty. Yale grad students can get one of three passing grades: Honors, High Pass, and Pass, while the proposal would have changed it to letter grades with pluses and minuses. As it is, grades at Yale mean nothing and are not very informative in terms of feedback; grade inflation being what it is, you have to try to earn below an HP. Since grad students themselves are the only ones who are ever likely to see grad school grades, grade reform seemed like a good idea. But apparently lots of people think grade fear would make students less likely to be adventurous in their course-taking. Meh. If you’re that afraid of having your ego bruised (since that would be the only repercussion of getting a C, the de facto bottom of the Yale grading scale), then you’ll get no sympathy from me.