Solicitor General's Brief in Cablevision Case

The United States Solicitor General’s office has filed its brief (posted online here) in the long-running RS-DVR matter, popularly referred to as the “Cablevision” case. The brief is terrific. The United States takes the position that the Supreme Court should not review the case, which had been decided unanimously by the Second Circuit in favor of the cable companies. This case has significant copyright implications, as well as implications for the balance of power between cable providers and those who hold copyright interests in television and movie programming.

The Solicitor General takes the position that the case did not meet the traditional standard for the Supreme Court to grant cert and that the Second Circuit “reasonably and narrowly resolved the issues” before it. The reasoning in the brief is persuasive.

For more information: Several news outlets have the story. (The Reuters piece says that the SG “denied” the plaintiffs’ request for a hearing, which — at least in technical terms — overstates the matter a bit by implying decision-making authority in the SG. Though the Court asked for the SG’s opinion, the Court reserves the right to decide whether to hear the case. Practically speaking, though, a grant of cert seems unlikely now, after the filing of this strong brief.) For previous coverage touching on the procedural aspects of the case, see, e.g., an article by the LA Times’s David G. Savage from January 2009. Also see the press release and summary page on the case published by Public Knowledge, which has worked on this matter; Gigi Sohn, its president, says she is pleased with the SG’s brief.

By way of disclosure: Elena Kagan, the United States Solicitor General and counsel of record in this matter, was my boss during the six years she served as dean of Harvard Law School, prior to her appointment to the Obama Administration.

Learning Race and Ethnicity, in the MacArthur Foundation/MIT Press Series

Learning Race and Ethnicity: Youth and Digital Media is the fourth book I’ve read in the MacArthur/MIT Press Series on Digital Media and Learning. This volume, edited by Anna Everett, is the furthest from my own field — law — and, for me, the most challenging.

Prof. Everett’s opening essay (which follows the excellent foreword by the series editors, as in each volume in the series) is an effective overview of what follows in the volume. She takes up the familiar debate about the term “digital divide” and why it now rankles more than it helps. She also reminds us that the old joke about how online nobody knows you’re a dog is no longer true, with the advent of rich media and other “advances” in digital technology and how it’s used. I was left, from her chapter, with one line resonating in particular: “the color of the dog counts.” (p. 5)

The rest of the volume consists of three clusters. Future Visions and Excavated Pasts is the first. Dara Byrne leads off with a piece on the future of race. She pulls in and incorporates a series of great quotes from message boards and other online public spaces; takes up (and takes on) John Rawls on the public/private question that runs through so many of our discussions of online life (p. 22); and digs deep into whether, looking ahead, there will be dedicated sites for different races. The punchline is that yes, “minority youth must have access to dedicated online spaces, not just mainstream or ‘race neutral’ ones.” (p. 33)

Tyrone Taborn’s “Separating Race from Technology” is the other essay in this first cluster. Taborn compares the likelihood that any group of students (“majority white or minority, rich or poor”) would know Kobe Bryant with the likelihood that they would know Dr. Mark Dean, the African-American engineer involved in IBM’s development of the first PC. His point is clear. As one of a series of possible solutions to the problem of too few minority youth having mentors and heroes in the technology world, Taborn calls for Digital Media Cultural Mentoring (p. 56).

The second cluster of essays takes up art and culture in the digital domain. Raiford Guins guides the reader through a tour of the ways that hip-hop culture, art, and use of technology come together online in the form of “black cultural production in the form of hip-hop 2.0.” (p. 78) It’s a must-read essay, helpful to read with a browser open and a fast broadband connection on tap. Guins has an intriguing segment on the future of the music label, among other take-aways (pp. 69–70).

Guins’ essay is well-paired with Chela Sandoval and Guisela Latorre’s celebration and contextualization of Judy Baca’s work at the Social and Public Art Resource Center (SPARC) in LA. (One wonders why LA gets more than its fair share of intriguing digital media production experiments and narratives.) Among other things, Sandoval and Latorre challenge the notion of “digital youth,” warning against delimiting the category based just on age — a helpful reminder of a point too easily forgotten. (p. 85) In the final essay of the cluster, Antonio Lopez offers insights into (and concerns about) digital media literacy with respect to Native American populations, told largely in the first person.

Jessie Daniels opens the third cluster with a jarring piece on hate, racism, and white supremacy online. Daniels picks up on themes about the fallacy of colorblindness established in Anna Everett’s introduction. With a link to Henry Jenkins‘ work, Daniels argues for a “multiple literacies” approach to shaping our shared cultural future online and offline. (p. 148 – 50)

Yet more jarring, to me anyway, is Douglas Thomas’s piece on online gaming cultures, called “KPK, Inc.: Race, Nation, and Emergent Culture in Online Games.” Thomas draws us into gaming environments only to reveal a culture of wild adventure, first-person shooter games, acquisition, treasure, money, and hate all rolled together. The crux of his argument centers on the “Korean problem” (pp. 163–64), a blend of bigotry, nationalism, and competitiveness. The racists that Thomas exposes “are usually Americans / Canadians and white” — and gamers. (p. 164) Along the way, Thomas distinguishes his approach from that of our Berkman colleague Beth Kolko. (pp. 155–56)

The final essay, by Mohan Dutta, Graham Bodie, and Ambar Basu, takes us in a new direction, further afield, toward the intersection of race, youth, the Internet, health, and information. The authors synthesize a great deal of disparate information in unexpected ways. The essay left me with an expanded frame of vision, and a frame that I never would have come up with on my own. Their punchline: “disparities in technology uses and health information seeking reflect broader structural disparities in society that adversely affect communities of color.” (p. 192)

On balance, this collection of essays hangs together very well. Each essay stakes out a strong point of view. Overall, the collection both informed my thinking and provoked more of it by raising hard issues about what growing up online means for race, ethnicity, identity, and health.

Eszter Hargittai on Digital Na(t)ives

We have the great pleasure today at the Berkman Center of hearing from Eszter Hargittai, a prof at Northwestern, on her large-scale research project on how 18- and 19-year-olds use digital technologies. She’s also worked on problems related to what she calls the “second-level digital divide” over the past decade or so. She surveyed over 1,000 students at UIC, one of the most diverse research universities in the country.

A set of important take-aways: she’s found a correlation between gender and the likelihood of creating and sharing digital content (women were less likely than men to share content online that they’ve created). But it turns out that skill level, not gender, is the relevant factor: if you correct for skill level, the gender difference goes away. She is also trying to figure out what these gaps mean in terms of life chances.

Her research homes in on the fact that what matters are skill differences, not just differences in technology access, when it comes to digital inequality. We need to provide training and education for kids in addition to access to the network. These findings — good news for her — are consistent with Eszter’s extensive body of work to date. And she’s plainly right. (This is much of what Urs Gasser and I are arguing in our book, Born Digital; we have to figure out how to say it half as elegantly as Eszter does.)

Eszter has an article coming out very soon, in a volume co-edited by danah boyd and Nicole Ellison, which makes a related set of claims. Her data inform the question of who uses social-networking sites (SNS). Women, she finds, are more likely to use SNSes than men (other than in the context of Xanga, where the numbers are reversed). People whose parents have lower educational backgrounds (which apparently correlates with lower socio-economic status (SES)) are more likely to be MySpace users, and those whose parents have higher educational backgrounds are more likely to use Facebook. (These data lead to conclusions much like what danah boyd claimed recently, which kicked up a bit of a storm. See the 297 comments on danah’s blog.)

If you missed Eszter’s talk, it’s worth catching it online at MediaBerkman.

(Separately: she’s also got thoughtful comments on her blog about our pending Cookie Crumbles video contest.)

Three Conversations on Intellectual Property: Fordham, University of St. Gallen, UOC (Catalunya)

Three recent conversations I’ve been part of offered a contrast in styles and views on intellectual property rights across the Atlantic. First, the Fordham International IP Conference, which Prof. Hugh Hansen puts on each year (in New York, NY, USA); second, the terrific classes in the Law and Economics of Intellectual Property that Prof. Urs Gasser teaches at our partner institution, the University of St. Gallen (in St. Gallen, Switzerland); and finally, today, the Third Congress on Internet, Law & Politics, held by the Open University of Catalonia (in Barcelona, Spain) and hosted by Raquel Xalabarder and her colleagues.

* * *

Fordham (1)

At Fordham, Jane Ginsburg of Columbia Law School moderated one of the panels. We were asked to talk about the future of copyright. One of the futures that she posited might come into being — and for which Fred von Lohmann and I were supposed to argue — was an increasingly consumer-oriented copyright regime, perhaps even one that is maximally consumer-focused.

– For starters, I am not sure that “consumer” maximization is the way to think about it. The point is that the group that used to be called the consumers are now not just consumers but also creators. It’s the maximization of the rights of all creators, including re-creators, in addition to consumers (those who benefit, I suppose, from experiencing what is in the “public domain”). This case for a new, digitally-inspired balance has been made best by Prof. Lessig in Free Culture and by many others.

– What are the problems with what one might consider a maximalized consumer focus? The interesting and hardest part has to do with moral rights. Prof. Ginsburg is right: this is a very hard problem. I think that’s where the rub comes.

– The panel agreed on one thing: a fight over compulsory licensing is certainly coming. Most argued that the digital world, particularly a Web 2.0 digital world, will lead us toward some form of collective, non-exclusive licensing solution, if not a compulsory licensing scheme, over time.

– “Copyright will be a part of social policy. We will move away from seeing copyright as a form of property,” says Tilman Luder, head of the copyright unit at the European Commission’s Directorate General for Internal Market. At least, he says, that’s the trend in copyright policy in Europe.

* * *

Fordham (2)

I was also on the panel entitled “Unauthorized Use of Works on the Web: What Can be Done? What Should be Done?”

– The first point is that “unauthorized use of works” doesn’t seem quite the relevant frame. There are lots of unauthorized uses of works on the web that are perfectly lawful and present no issue at all: use of works not subject to copyright, re-use where an exception applies (e.g., fair use, implied license, the TEACH Act), and so forth. These uses are still relevant to the discussion, though.

– In the narrower frame of unauthorized uses, I think there are a lot of things that can be done.

– The first and most important is to work toward a more accountable Internet. People who today violate copyright, undermining the ability of creators to make a living from their creative works, need to change. Some of this might well be done in schools, through copyright-related education. The idea should be to put young people in the position of being a creator, so they can see the tensions involved: being the re-user of the works of others, and being the creator of new works, which others may in turn use.

– A second thing is continued work on licensing schemes. Creative Commons is extraordinary. We should invest more in it, build extensions to it, and support those who are extending it on a global level (including in Catalunya!).

– A third thing, along the lines of what Pat Aufderheide and Peter Jaszi are doing with filmmakers, is to establish best practices for industries that rely on ideas like fair use.

– A fourth thing is to consider giving more definition to the unarticulated rights — not the exclusive rights of authors that we well understand, but the rights of would-be re-users under exceptions and limitations.

– A fifth area, and likely the discussion that will dominate this panel, is the role of intermediaries. This is a big issue, if not the key issue, in most disputes that crop up across the Internet. Joel Reidenberg of Fordham Law School has written a great deal on this cluster of issues of control, liability, and responsibility. CDA Section 230 raises this issue in the defamation context as well. The question of course arose in the Napster, Aimster, and Grokster contexts. Don Verrilli and Alex Macgillivray argued this topic in the YouTube/Viacom context — the topic on which sparks most dramatically flew. They fought over whether Google was offering the “claim your content” technology to all comers or just to those with whom Google has deals (Verrilli argued the latter, Macgillivray the former) and whether an intermediary could really know, in many instances, whether a work is subject to copyright without being told by the creators (Verrilli said that wasn’t the issue in this case; Macgillivray said it’s exactly the issue, and that because you can’t tell in so many cases, DMCA 512 compliance should be the end of the story).

* * *

St. Gallen

Across the Atlantic, Prof. Dr. Urs Gasser and his teaching and research teams at the University of St. Gallen are having a parallel conversation. Urs is teaching a course on the Law and Economics of Intellectual Property to graduate students in law at St. Gallen. He kindly invited me to come teach with him and his colleague Prof. Dr. Beat Schmid last week.

– The copyright discussion took up many of the same topics that the Fordham panelists and audience members were struggling with. The classroom in Switzerland seemed to split between those who took a straight market-based view of the topics generally and those who came at it from a free culture perspective.

– I took away from this all-day class a sense that there’s quite a different set of experiences among Swiss graduate students, as compared to US graduate students, related to user-generated content and the creation of digital identity. The examples I used in a presentation of what Digital Natives mean for copyright looking ahead — Facebook, MySpace, LiveJournal, Flickr, YouTube, and so forth — didn’t particularly resonate. I should have expected this outcome, given that these services are not just US-based but also in English.

– The conversation focused instead on how to address the problem of copyright on the Internet looking forward. The group had read Benkler, Posner, and Shavell in addition to a group of European writers on digital law and culture. One hard problem buried in the conversation: how much can the traditional Law and Economics approach help in analyzing what to do about copyright from a policy perspective? Generally, the group seemed to believe that Law and Economics could help a great deal, on some levels, though 1) the drivers other than straight economic gain that push Internet-based creativity and 2) the extent to which peer production yields benefits in terms of innovation make it tricky to put together an Excel spreadsheet to analyze the costs and benefits of a given regulation. I left that room thinking that a Word document might be more likely to work, with inputs from the spreadsheet.

* * *

Barcelona

The UOC is hosting its third Congres Internet i Politica: Noves Perspectives in Barcelona today. JZ is the keynoter, giving the latest version of The Future of the Internet — and How to Stop It. The speech just keeps getting better and better as the corresponding book nears publication. He’s worked in more from StopBadware and the OpenNet Initiative and a new slide on the pattern of Generativity near the end. If you haven’t heard the presentation in a while, you’ll be wowed anew when you do.

– Jordi Bosch, the Secretary-General of the Information Society of Catalonia, calls for respect for two systems: full copyright and open systems that build upon copyright.

– Prof. Lilian Edwards of the University of Southampton spoke on the ISP liability panel, along with Raquel Xalabarder and Miquel Peguera. Prof. Edwards talked about an empirical research project on the system formerly called BT Cleanfeed. BT implements the IWF’s list of sites to be blocked, in her words a blacklist without a set appeals process. According to Prof. Edwards’ slides, the UK government “have made it plain that if all UK ISPs do not adopt ‘Cleanfeed’ by end 2007 then legislation will mandate it.” (She cites Hansard, June 2006, and the Gowers Report.) She points to the problem that there’s no debate about the widespread implementation of this blacklist and no particular accountability for what’s on it or how it is implemented.

– Prof. Edwards’ story has big implications for not just copyright, but also the StopBadware (regarding block lists and how to run a fair and transparent appeals process) and ONI (regarding Internet filtering and how it works) research projects we’re working on. Prof. Edwards’ conclusion, though, was upbeat: the ISPs she’s interviewed had a clear sense of corporate social responsibility, which might map to helping to keep the Internet broadly open.

For much better coverage than mine, including photographs, scoot over to ICTology.

Viacom Believes Fewer Than 60 Take-Down Mistakes

I’ve been e-mailing with Michael Fricklas of Viacom since I posted about Jim Moore’s home video that got caught in Viacom’s 100,000 take-down push on Friday. Mr. Fricklas wrote to me a few times during their process of assessing how many errors they made out of 100,000. Today, he wrote: “… we’re achieving an error rate of .05% – (we have under 60 errors so far)” and that “we’ll know more as users respond to communication from YouTube”. He noted also: “Wish it was zero.”
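Fricklas’s numbers are easy to sanity-check. A quick back-of-the-envelope calculation (my own illustration; the variable names and the arithmetic are mine, only the figures come from his e-mail) shows how the quoted “.05%” and “under 60 errors” line up:

```python
# Sanity check on the take-down error-rate figures quoted above.
# The inputs come from the post; the arithmetic is my own illustration.
total_notices = 100_000
claimed_rate_percent = 0.05  # ".05%" as quoted by Mr. Fricklas

# Errors implied by the claimed rate:
implied_errors = round(total_notices * claimed_rate_percent / 100)
print(implied_errors)  # 50 -- consistent with "under 60 errors so far"

# Conversely, exactly 60 errors would work out to a slightly higher rate:
rate_at_60_errors = 60 / total_notices * 100
print(f"{rate_at_60_errors:.2f}%")  # 0.06%
```

In other words, the quoted .05% figure corresponds to roughly 50 confirmed errors, which squares with “under 60”; a full 60 errors would nudge the rate up to .06%.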

So, let’s take Viacom at its word for the moment. A few interesting questions of law pop out from here:

1) If Viacom is right 99,940 times out of 100,000, what rights do those 60 people have when they choose to push back? Just to have the file put back up? Do they have a further claim against Viacom? Or against YouTube, for that matter?

2) Mr. Fricklas asserts that “Under DMCA, I believe that YouTube needs to retain the material and repost it if an individual believes that the copyright notice was in error.” I suppose that Section 512(g) does include the presumption that YouTube (or a similarly situated party) must hold on to the allegedly infringing material once taken down, since it may have to put the material back up pursuant to a counter-notification. But the process of what the intermediary has to do is not explicit. What happens to the analysis if YouTube has retained nothing, and the original poster retained nothing but has a very strong fair use case or an outright winner on copyright grounds? Does the DMCA need to say more than it does by way of a process to protect users? There’s also the question of what policy is required to handle repeat infringers, which has caused a lot of confusion on university campuses.
Some good exam questions buried here.

A Voice from Outside the US on the Viacom-YouTube Matter

“Jaegercat” writes in a discussion board on this topic: “I don’t live in the US. I’ve already responded with the counter-notification via fax, but I have no idea how to proceed from here if they don’t respond. The video that they pulled was an original work that took me around 5 months to make, that has been shown in a film festival, and I feel violated at the public accusation that this wasn’t my own work. … I’m definitely interested in collective action, even though I don’t even know if I’m entitled to be part of it.”

What's the "Day 2" Story on the Viacom-YouTube Tussle?

Google News suggests that there have been about 500 stories so far, across the news sources it scans, on the topic of Viacom’s 100,000 take-down notices to YouTube users. Most of the stories focus, understandably, on the business dynamics of the matter: 1) why Viacom did this; 2) the possibility (or likelihood, or unlikelihood, depending upon whom you ask) of a license deal in the offing between the two entities; 3) the response from YouTube/Google to the take-downs; 4) the status of the enhanced tools for copyright owners who want to track works they believe to be illegally posted; and so forth.

A few possible Day 2 stories that have not been discussed extensively in the MSM coverage, and that are of greater interest to me:

– How many of the 100,000 notices were mis-fires, like the one to Jim Moore? A few hundred, a few thousand? (Is this person one of them?) And what is the impact of those mistakes? Is there any pushback against the copyright holder who made these mistakes? Any liability, say under DMCA Section 512(f)? (Top10Sources, with which I work, is seeking to aggregate these stories and links to the clips that are put back up so we can all judge for ourselves.)

– Does it matter under the law whether YouTube provides the enhanced copyright protection tools that are bandied about in many of these articles? Could they release them selectively, say to those who license with them and not to those who do not?

– Why isn’t Viacom doing what CBS has done, for instance (as a Forrester analyst is asking on Charlene Li’s blog)?

– Who will build a service to compete with YouTube? Will the policy for handling copyright matter, one way or another, in terms of customer adoption of competing services?

– Is there a copyright reform strategy, and/or one or a series of business ideas (like Lisensa, e.g., with which I am involved) or extensions to NGOs like Creative Commons, that can help address the copyright crisis that continues to rage on the web?