Internet & Democracy: China, Iran, the Arabic Blogosphere

These are heady days for the study of the Internet and its relationship to the practice of politics and the struggles over democratic decision-making. Three stories — in China, in Iran, and throughout the Arabic-speaking world — make a powerful case for the deepening relevance of citizens' use of new technologies to the balance of political power around the world.

First, there was the Green Dam story. The Chinese government upped the ante in the Internet filtering business by announcing a new regulation on the providers of computer hardware. This regulation would require that new computers ship with the so-called Green Dam filtering software. We at the ONI released an analysis of this proposed software mandate. This story matters because having state-mandated software at the layer closest to the user would have an extraordinary chilling effect on the use of these technologies, not to mention the possibilities for censorship, surveillance, and other forms of control that such software would open up for the state. (Plus, there was an increase in censorship activity around June 4.)

Today, there is the crisis in Iran. At a moment of political upheaval, the key stories about what is happening on the ground are being told, and supplemented, by citizens on web 2.0 tools — blogs, Twitter, social networks, on sites like Global Voices. The State Department is reportedly working with Twitter to keep the service up — and the information flowing in and out of Iran, as traditional media find themselves more constrained than in other settings. I am imagining the conversation within the intelligence and diplomatic communities, and elsewhere in politics, about the value of this discourse and open source intelligence in general in these moments of crisis. If ever it were in doubt, I’d imagine today is helping to put many doubts to rest about the importance of this networked public sphere.

In the same spirit, tomorrow, we are releasing our study of the Arabic language blogosphere. The real-space, official session will take place at the United States Institute of Peace, as part of their wonderful “bullets to bytes” series. We’re delighted to have the chance to release our study with these terrific colleagues — and, together, to bust some myths about the networked public sphere in the Arabic world. The idea is to set forth a systematic, empirical study of the extraordinary public conversations we can observe in tens of thousands of blogs across the Arabic-speaking world.

What a week!

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over a copy of the new book — the first copies I’ve had my hands on.

This question for Turkey runs deep, it seems, from what I’m hearing. As it has been described to me, the state is on the knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of those states that filter the net extensively lie to its east and south; its neighbors in Europe filter the Internet, though much more selectively (Nazi paraphernalia in Germany and France, e.g., and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly-accessible ISPs). It’s not that there are no Internet restrictions in the states in Europe and North America, nor that these places necessarily have it completely right (we don’t). It’s the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of information blockages that differ. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content along with its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 orders have been issued by this authority, and an additional 40 orders by a court to filter content. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site, YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks have been issued in response to complaints about videos posted to YouTube that were held to be derogatory toward the founder, Ataturk. The blocks have lasted about 72 hours.

After learning from the court of the offending videos, YouTube has apparently removed them, and the service has been subsequently restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara in the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains. Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of the offending URLs. Registering is not something that many multinational companies are going to be able or willing to do, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there’s a system of shutting down the service and restoring it only after the offending content has been filtered out.
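The overblocking concern behind domain-level orders can be made concrete with a minimal sketch. This is purely illustrative — the blocklists below are hypothetical, and this is not how any particular government or ISP implements its filtering — but it shows why a domain-level block sweeps up every page on a site while a URL-level block touches only the offending page:

```python
from urllib.parse import urlparse

# Hypothetical blocklists, for illustration only.
BLOCKED_URLS = {"http://example-video-site.com/watch?v=offending"}
BLOCKED_DOMAINS = {"example-video-site.com"}

def blocked_by_url(url: str) -> bool:
    # URL-level filtering: only the specific offending page is blocked.
    return url in BLOCKED_URLS

def blocked_by_domain(url: str) -> bool:
    # Domain-level filtering: every page on the listed domain is blocked,
    # including the vast majority that are unobjectionable.
    return urlparse(url).hostname in BLOCKED_DOMAINS

innocent = "http://example-video-site.com/watch?v=harmless"
print(blocked_by_url(innocent))     # False: untouched by URL filtering
print(blocked_by_domain(innocent))  # True: swept up by the domain block
```

The trade-off is that URL-level filtering requires ISPs to inspect full request paths (costlier, and harder with encrypted traffic), which is presumably part of why many states default to the blunter domain-level approach.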

* * *

The Internet – especially in its current phase of development – is making possible innovation and creativity in terms of content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have greater voice in their societies. These technologies are also giving rise to the creation of new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in how it is facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking access to political, social, and cultural critique from its citizens. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit along with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether online Web sites or new technologies like peer-to-peer services or live videoblogging?

In Turkey, the Internet has to date been largely free from government controls. Free expression and innovation have found homes online, in ways that benefit culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When the state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, more costly, the chilling effect on citizens who rely on the digital world for their livelihood and key aspects of their culture – in fact, the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.

How Does a Foundation Program Officer Decide How to Make Grants?

At the Berkman Center’s lunch speaker series, Gary Kebbel of the Knight Foundation is with us today. I’m not sure that I’ve ever seen such a public, open discussion by a program officer of a foundation about how they do their work in funding great projects. The Knight Foundation has been running the News Challenge for a few years, and they seek to learn and improve their processes each time. This year, they doubled the number of applications and, even more impressive, they reached out successfully to a global set of applicants (good news, we think, coming from the Global Voices-style perspective, as we do here at the Berkman Center). Knight has also continued to innovate with ways for people to submit public or private applications to the consideration process.  One thing I learned: News Challenge applicants are free to read these comments, in the case of an open application, and then go back and revise and improve their application. They’ve also got a blog on PBS called Idea Lab, part of the PBS Media Shift blogging empire (hey! there’s David Ardia).

In the spirit of our interest in young people, Digital Natives, doing innovative things online: The most interesting experiment, from my perspective, is their work with MTV and MTV International on the Young Creators Award. They set aside $500,000 for this award, geared toward those 25 years old and younger. Of the new young applicants, almost half are international.

Some of the upticks that they are seeing in the applications to this year’s News Challenge: Facebook applications, use of GPS-related tools, and place-tagging for wireless.

Grant-seekers and innovators and young creators around the world, watch Gary explain how the sausage is made when it comes to grant-making at the Knight Foundation. Watch also for commentary from uber-bloggers Ethan Zuckerman and David Weinberger and Lisa Williams, who are in the room here in real-time.

Digital Natives Conversation Goes International

One of the themes of Born Digital, the book Urs Gasser and I are working on, is excitement around the possibility of an emerging global culture of young people who use technology in particular ways. (We’re equally interested in the problems of those who may be left out of that emerging culture, too, as Ethan Zuckerman and Eszter Hargittai and others are quick to remind us.) It was fun, in this context, to see a few international responses to / reverberations of our post about definitions and subtleties around who is a “digital native” and who is not: one from Canada’s paper of record, the Globe and Mail; a few in Spanish; and a few in German; in Italian; and from our friend and colleague Shenja on the Media@LSE (London School of Economics) blog.

(Since this is a joint research project with our colleagues at the University of St. Gallen in Switzerland, I suppose it’s not really surprising — the conversation actually started internationally.)

Yahoo!, the Shi Tao Case, and the Benefit of the Doubt

Rep. Tom Lantos has called on Yahoo! executives to return to Congress to talk about what they knew and when in the Shi Tao case. Rep. Lantos alleges that Yahoo!’s general counsel misled a hearing (at which I and others submitted testimony, too) in 2006 by indicating that the company knew less than it actually did about why the Chinese state police were asking for information about Shi, a dissident and journalist. Yahoo! did turn over the information; the Chinese prosecuted Shi; he remains in jail; and the issue continues to point to the single hardest thing about our US tech companies doing business in places that practice online censorship and surveillance. The case has led to Congressional hearings, proposed legislation, shareholder motions, and lawsuits against Yahoo!

(For much more on the general topic of Internet filtering and surveillance, see the OpenNet Initiative’s web site, a consortium of four universities of which we are a part: Cambridge, Harvard Law School, Oxford, and Toronto.)

The hard problem at the core of this issue is that police come to technology companies every day to ask for information about their users. It is a fair point for technology companies to make that they often cannot know much about the reason for the policeman’s inquiry. It could be completely legitimate: an effort to prevent a crime from happening or bringing a criminal to justice. In the United States, these requests come in the context of the rule of law, including a formal reliance on due process. And every once in a while, a technology company pushes back on requests for data of this sort, publicly or privately. The process is imperfect, if you consider it from a privacy standpoint, but it works — a balance is found between the civil liberties of the individual and the legitimate needs of law enforcement to keep us safe and to uphold the rules to which we all agree as citizens.

This hard problem is much harder in the context of, say, China. It’s not the only example, but it’s the example here with Shi Tao. In Yahoo!’s testimony in 2006, Michael Callahan, the executive vice president and general counsel, said that Yahoo! did not know the reasons for the Chinese state police’s request for information about Shi.

You can read the testimony for yourself here. The relevant statement by Mr. Callahan is:

“The Shi Tao case raises profound and troubling questions about basic human rights. Nevertheless, it is important to lay out the facts. When Yahoo! China in Beijing was required to provide information about the user, who we later learned was Shi Tao, we had no information about the nature of the investigation. Indeed, we were unaware of the particular facts surrounding the case until the news story emerged.” (Emphasis mine.)

The key phrase: “No information about the nature of the investigation.” Not that the information was inconclusive, or vague, or hard to translate, or possibly of concern. “No information.”

Now, we are told, there’s a big disagreement about whether that testimony was accurate.

Rep. Lantos, in a statement yesterday, claims that Callahan misled the committee. Lantos writes: “Our committee has established that Yahoo! provided false information to Congress in early 2006. … We want to clarify how that happened, and to hold the company to account for its actions both before and after its testimony proved untrue. And we want to examine what steps the company has taken since then to protect the privacy rights of its users in China.” Rep. Chris Smith (R-NJ) says it more harshly: “Last year, in sworn testimony before my subcommittee, a Yahoo! official testified that the company knew nothing ‘about the nature of the investigation’ into Shi Tao, a pro-democracy activist who is now serving ten years on trumped up charges. We have now learned there is much more to the story than Yahoo let on, and a Chinese government document that Yahoo had in their possession at the time of the hearing left little doubt of the government’s intentions. … U.S. companies must hold the line and not work hand in glove with the secret police.”

Yahoo! responded with its own statement, pasted here in full:

“Yahoo! Statement on Foreign Relations Committee Hearing Announcement
October 16, 2007

“The House Foreign Affairs Committee’s decision to single out Yahoo! and accuse the company of making misstatements is grossly unfair and mischaracterizes the nature and intent of our past testimony.

“As the Committee well knows from repeated meetings and conversations, Yahoo! representatives were truthful with the Committee. This issue revolves around a genuine disagreement with the Committee over the information provided.”

“We had hoped that we could work with the Committee to have an open and constructive dialogue about the complicated nature of doing business in China.”

“All businesses interacting with China face difficult questions of how to best balance the democratizing forces of open commerce and free expression with the very real challenges of operating in countries that restrict access to information. This challenge is particularly acute for technology and communication companies such as Yahoo!.”

“As we have made clear to Chairman Lantos and the Committee on Foreign Affairs, Yahoo! has treated these issues with the gravity and attention they demand. We are engaged in a multi-stakeholder process with other companies and the human rights community to develop a global code of conduct for operating in countries around the world, including China. We are also actively engaged with the Department of State to assist and encourage the government’s efforts to deal with these issues on a diplomatic level.”

“We believe the answers to these broad and complex questions require a constructive dialogue with all stakeholders engaged in a collaborative manner. It is our hope that the Committee will approach the hearing in that same constructive spirit.”

I can understand why Yahoo! is claiming that they are being treated unfairly. Yahoo! has been the company that has been most tarred, in some ways, for a problem that is industry-wide, and should be resolved on an industry-wide (or broader, such as law or international law) basis. Yahoo! has been a very constructive player in the ongoing effort to come up with a code of conduct for companies in this position (along with Google, Microsoft, and others). And Yahoo! has been working hard to establish internal practices to head off similar situations and voicing its concern about Chinese policies in this arena. Their efforts since the Shi Tao case on this front have been laudable.

But if in fact the company knew more — even a little bit more — about why the Chinese police came knocking for Shi Tao than what Mr. Callahan led all of us to believe (“no information”), then it is a big problem. Unless there are facts that I’m missing, for the Congress to call Yahoo! back to Capitol Hill to correct the record, in public, is completely appropriate, if “no information” is not what we were meant to understand. It may well be that what the company knew was in fact so vague, as many legal terms are in China, as to be inconclusive. It may well be that someone in the company knew, but the right people didn’t know — and that an internal process was flawed in this case. But those are very different discussions, ones we should have, from the straight-up claim that the company had no context for the request.

Because I respect many of the people working hard on this issue within Yahoo!, and credit that Jerry Yang is very well-meaning on this topic, I’ve been willing to give Yahoo! a big benefit of the doubt. After all, a key part of our own legal system — as part of a rule of law that we’ve come to trust here — calls on us to do so. The big problem here for me is if we’ve in fact been misled, all of us, to believe that it was one problem when it really was quite another. If “no information” proves to be inaccurate, I’m not sure how much longer I can keep extending that benefit of the doubt in this case.

(The Merc’s Frank Davies wrote up the story here, among a few hundred others in the last 24 hours. Rebecca MacKinnon, of course, had the story months before (also here) and said already much what I’ve said here.)

Three Conversations on Intellectual Property: Fordham, University of St. Gallen, UOC (Catalunya)

Three recent conversations I’ve been part of offered a contrast in styles and views on intellectual property rights across the Atlantic. First, the Fordham International IP conference, which Prof. Hugh Hanson puts on each year (in New York, NY, USA); the terrific classes in Law and Economics of Intellectual Property that Prof. Urs Gasser teaches at our partner institution, the University of St. Gallen (in St. Gallen, Switzerland); and finally, today, the Third Congress on Internet, Law & Politics held by the Open University of Catalonia (in Barcelona, Spain), hosted by Raquel Xalabarder and her colleagues.

* * *

Fordham (1)

At Fordham, Jane Ginsburg of Columbia Law School moderated one of the panels. We were asked to talk about the future of copyright. One of the futures that she posited might come into being — and for which Fred von Lohmann and I were supposed to argue — was an increasingly consumer-oriented copyright regime, perhaps even one that is maximally consumer-focused.

– For starters, I am not sure that “consumer” maximization is the way to think about it. The point is that it’s the group that used to be called the consumers who are now not just consumers but also creators. It’s the maximization of the rights of all creators, including re-creators, in addition to consumers (those who benefit, I suppose, from experiencing what is in the “public domain”). This case for a new, digitally-inspired balance has been made best by Prof. Lessig in Free Culture and by many others.

– What are the problems with what one might consider a maximalized consumer focus? The interesting and hardest part has to do with moral rights. Prof. Ginsburg is right: this is a very hard problem. I think that’s where the rub comes.

– The panel agreed on one thing: a fight over compulsory licensing is certainly coming. Most argued that the digital world, particularly a Web 2.0 digital world, will lead us over time toward some form of collective, non-exclusive licensing solution — if not a compulsory licensing scheme.

– “Copyright will be a part of social policy. We will move away from seeing copyright as a form of property,” says Tilman Luder, head of copyright at the directorate general for internal markets at the competition division of the European Commission. At least, he says, that’s the trend in copyright policy in Europe.

* * *

Fordham (2)

I was also on the panel entitled “Unauthorized Use of Works on the Web: What Can be Done? What Should be Done?”

– The first point is that “unauthorized use of works” doesn’t seem quite the relevant frame. There are lots of unauthorized uses of works on the web that are perfectly lawful and present no issue at all: use of works not subject to copyright, re-use where an exception applies (fair use, implied license, the TEACH Act, e.g.), and so forth. These uses are still relevant to the discussion, though.

– In the narrower frame of unauthorized uses, I think there are a lot of things that can be done.

– The first and most important is to work toward a more accountable Internet. People who today are violating copyright and undermining the ability of creators to make a living off of their creative works need to change. Some of this might well be done in schools, through copyright-related education. The idea should be to put young people in the position of being a creator, so they can see the tensions involved: being the re-user of some works of others, and being the creator of new works, which others may in turn use.

– A second thing is continued work on licensing schemes. Creative Commons is extraordinary. We should invest more in it, build extensions to it, and support those who are extending it on a global level (including in Catalunya!).

– A third thing, along the lines of what Pat Aufderheide and Peter Jaszi are doing with filmmakers, is to establish best practices for industries that rely on ideas like fair use.

– A fourth thing is to consider giving more definition to the unarticulated rights — not the exclusive rights of authors, which we understand well, but the rights of those who would re-use works under exceptions and limitations.

– A fifth area, and likely the discussion that will dominate this panel, is to consider the role of intermediaries. This is a big issue, if not the key issue, in most issues that crop up across the Internet. Joel Reidenberg of Fordham Law School has written a great deal on this cluster of issues of control and liability and responsibility. CDA Section 230 in the defamation context raises this issue as well. The question of course arose in the Napster, Aimster, and Grokster contexts. Don Verrilli and Alex Macgillivray argued this topic in the YouTube/Viacom context — the topic on which sparks most dramatically flew. They fought over whether Google was offering the “claim your content” technology to all comers or just to those with whom Google has deals (Verrilli argued the latter, Macgillivray the former) and whether an intermediary could really know, in many instances, whether a work is subject to copyright without being told by the creators (Verrilli said that wasn’t the issue in this case; Macgillivray said it’s exactly the issue, and that you can’t tell in so many cases, such that DMCA 512 compliance should be the end of the story).

* * *

St. Gallen

Across the Atlantic, Prof. Dr. Urs Gasser and his teaching and research teams at the University of St. Gallen are having a parallel conversation. Urs is teaching a course on the Law and Economics of Intellectual Property to graduate students in law at St. Gallen. He kindly invited me to come teach with him and his colleague Prof. Dr. Beat Schmid last week.

– The copyright discussion took up many of the same topics that the Fordham panelists and audience members were struggling with. The classroom in Switzerland seemed to split between those who took a straight market-based view of the topics generally and those who came at it from a free culture perspective.

– I took away from this all-day class a sense that there’s quite a different set of experiences among Swiss graduate students, as compared to US graduate students, related to user-generated content and the creation of digital identity. The examples I used in a presentation of what Digital Natives mean for copyright looking ahead — Facebook, MySpace, LiveJournal, Flickr, YouTube, and so forth — didn’t particularly resonate. I should have expected this outcome, given that these are not just US-based services but also English-language ones.

– The conversation focused instead on how to address the problem of copyright on the Internet looking forward. The group had read Benkler, Posner and Shavell in addition to a group of European writers on digital law and culture. One hard problem buried in the conversation: how much can the traditional Law and Economics approach help in analyzing what to do with respect to copyright from a policy perspective? Generally, the group seemed to believe that Law and Economics could help a great deal, on some levels, though 1) the different drivers that are pushing Internet-based creativity — other than straight economic gains — and 2) the extent to which peer-production prompts benefits in terms of innovation make it tricky to put together an Excel spreadsheet to analyze costs and benefits of a given regulation. I left that room thinking that a Word document might be more likely to work, with inputs from the spreadsheet.

* * *

Barcelona

The UOC is hosting its third Congres Internet i Politica: Noves Perspectives in Barcelona today. JZ is the keynoter, giving the latest version of The Future of the Internet — and How to Stop It. The speech just keeps getting better and better as the corresponding book nears publication. He’s worked in more from StopBadware and the OpenNet Initiative and a new slide on the pattern of Generativity near the end. If you haven’t heard the presentation in a while, you’ll be wowed anew when you do.

– Jordi Bosch, the Secretary-General of the Information Society of Catalonia, calls for respect for two systems: full copyright and open systems that build upon copyright.

Prof. Lilian Edwards of the University of Southampton spoke on the ISP liability panel, along with Raquel Xalabarder and Miquel Peguera. Prof. Edwards talked about an empirical research project on the project formerly called BT Cleanfeed. BT implements the IWF’s list of sites to be blocked — in her words, a blacklist without a set appeals process. According to Prof. Edwards’ slides, the UK government “have made it plain that if all UK ISPs do not adopt ‘Cleanfeed’ by end 2007 then legislation will mandate it.” (She cites to Hansard, June 2006 and Gower Report.) She points to the problem that there’s no debate about the widespread implementation of this blacklist and no particular accountability for what’s on this blacklist and how it is implemented.

– Prof. Edwards’ story has big implications for not just copyright, but also the StopBadware (regarding block lists and how to run a fair and transparent appeals process) and ONI (regarding Internet filtering and how it works) research projects we’re working on. Prof. Edwards’ conclusion, though, was upbeat: the ISPs she’s interviewed had a clear sense of corporate social responsibility, which might map to helping to keep the Internet broadly open.

For much better coverage than mine, including photographs, scoot over to ICTology.

Professor Mary Wong on Intellectual Property Rights and Rhetoric, with Nesson as Interlocutor

Professor Mary Wong of Franklin Pierce Law Center is here today at the Berkman Center. Mary’s talk is a series of provocations about language. She’s taking on the trope of the individual author. She is of the “dualist school”: there is a minor but existing tradition of doing more with natural rights-type reasoning alongside the utilitarian framework that undergirds the United States IPR system.

Professor Charles Nesson, the Berkman Center’s founder, who thinks a lot about the rhetorical frame, put it nicely: Mary homed in on both the stability and the fluidity of the rhetoric around intellectual property rights.

The single greatest problem in US law in this area, Charlie says, is that the burden of proof in fair use falls on the person re-using the work, not on the person asserting her or his underlying right.

What could we do, Charlie asks? We could think about universities as a client, and law reform as our tactic. What if we were to take up as a cause shifting the burden of proof in the fair use context?

Ethan Zuckerman pushed back on Mary’s suggestion that the Universal Declaration of Human Rights might be a good model in terms of language for reframing of the rhetoric around IPR. It’s a shaky foundation, EZ argues, kind of like trying to build community in Palestine. EZ says that Mary is spending her time in the aspirational zone, not in the real world. What is it that we actually do, EZ wants to know? In universities, we find texts we like and xerox them and give them to students, for instance (not at the Berkman Center, of course, but…). We should work from there and try to get to a legal regime that works, says EZ.

If you’ve missed her talk in real-time, please find it at MediaBerkman.