I’m with our friends at the Yale Information Society Project today for a fine conference called the Open Standards International Symposium. Eddan Katz and company have assembled a group from many of the places around the world where this issue is raging, along with representatives of many of the key industry players and stakeholders, like CDT. My notes for the Law panel are here. My focus is on the relationship between open standards and interoperability.
Later this week, the Berkman Center heads west to San Francisco. We’re hosting an unconference on Mobile Identity, led by fellows Doc Searls, John Clippinger, Mary Rundle, Urs Gasser and others. It’s free and open, but you should sign up if you’d like to come, as space is limited. CNet is kindly hosting us. We’re also planning an informal reception for Berkman Center alums and friends; let one of us know if you’ll be in San Francisco on Friday night and we’ll ping you an invite.
Today, the Berkman Center joins Urs Gasser and all our friends from the University of St. Gallen in hosting a workshop on interoperability and innovation, in Weissbad, Switzerland. We are in the company of an interesting, eclectic group of technologists, academics, and NGO leaders. The briefing papers are online.
This workshop is one in a series of such small-group conversations intended both to foster discussion and to inform our own work in this area of interoperability and its relationship to innovation in the field that we study. This is among the hardest, most complex topics that I’ve ever taken up in a serious way.
As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful.
Beyond calling for an interdisciplinary and international group of researchers or research inputs, this topic resists purely abstract treatment: interop makes sense conceptually only in the context of a set of facts. We’ve decided, for starters, to focus on digital media (DRM interop in the music space in particular); digital identity; and a third primary case (which may be e-Communications, web services, and office applications). One of our goals in this research is to integrate our previous work on digital media, digital ID, web 2.0, and so forth into this cross-cutting topic of interop.
Another thing is quite clear, as stated most plainly and eloquently by Prof. Francois Leveque of the Ecole des Mines: we need to acknowledge what we do not know, and we really do not know — empirically — to what extent interop has an impact on innovation. A major thrust of our work is to try to establish models of analysis that might help, in varying factual circumstances, in the absence of empirical data as to the costs and benefits of a certain regulatory decision.
This research effort is supported primarily by a gift from Microsoft (as always in our work with corporate sponsors, this gift is unrestricted and mixed with other such unrestricted funds, as well as our core funding from various sources, to mitigate the risk that we are influenced in our work by virtue of sponsorship). We have been blessed by our partners in industry, including many at Microsoft from the Legal and Corporate Affairs group, led by Annemarie Levins on this project, by their willingness to share with us an in-depth view of their work across a range of areas on interop. We’ve also been supported by the input from technologists at IBM and Intel in this event, and many other firms, through our interviewing process. We’d love to hear from other industry, and non-industry, players with an interest in this field.
In big interoperability news, Microsoft and Novell have entered into a deal to work together. Those are some interesting bedfellows. Much to unpack and understand.
One insight, from ArsTechnica’s report: “From Microsoft’s standpoint, virtualization is a good thing, especially when Windows is the host operating system. A close linkage between Microsoft and Novell reinforces Microsoft’s message to corporate types that Microsoft’s Windows Server and Virtual Server products are serious players, no matter what your mix of operating systems is.”
A copy of an announcement letter, which I received by e-mail, also reads in relevant part: “More importantly, Microsoft announced today that it will not assert its patents against individual, non-commercial developers. Novell has secured an irrevocable promise from Microsoft to allow individual and non-commercial contributors the freedom to continue open source development, free from any concern of Microsoft patent lawsuits. That’s right, Microsoft wants you to keep hacking.”
It brings to mind Bill Gates’ executive e-mail about interoperability by design in software development at Microsoft.
(For a few examples: don’t miss Fred von Lohmann as interviewed by John Battelle. Declan McCullagh and Anne Broache have an extensive piece highlighting the continuing uncertainty in the digital copyright space and quoting experts like Jessica Litman. Steve Ballmer brings it up in his BusinessWeek interview on the deal, asking, “And what about the rights holders?” And the enormously clever Daniel Hausermann has an amusing take on his new blog.)
My view (in large measure reflected in the WSJ here, in a discussion with Prof. Stan Liebowitz) is that Google is taking on some, but not all that much, copyright risk in its acquisition of YouTube. Google has already proven its mettle in terms of offering services that bring with them a reasonably high appetite for copyright risk: witness the lawsuits filed by the likes of the publishing industry at large; the pornographer Perfect 10; and Agence France Presse. There’s no doubt that Google will have to respond to challenges on both secondary copyright liability and direct copyright liability as a result of this acquisition. If they are diligent and follow the advice of their (truly) brilliant legal team, I think Google should be able to withstand these challenges as a matter of law.
The issue that pops back out the other side of this flurry of interest is the broader question of the continued uncertainty with respect to digital copyright. Despite what I happen to consider a reasonably good case in Google’s favor on these particular facts (so far as I know them), there is an extraordinary amount of uncertainty on digital copyright issues in general. Mark Cuban’s couple of posts on this topic are particularly worth reading; there are dozens of others.
Many business models in the Web 2.0 industry in particular hinge on the outcome of this uncertainty. A VC has long written about “the rights issues” at the core of many businesses that are built, or will be built, on what may be the sand — or what may turn out to be a sound foundation — of “micro-chunked” content. Lawrence Lessig has written the most definitive work on this topic, especially in the form of his book, Free Culture. The RSS-and-copyright debate is one additional angle on this topic. Creative Commons licenses can help to clarify the rights associated with micro-chunked works embedded in, or syndicated via, RSS feeds.
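To make the RSS angle concrete: a feed can declare its license in a machine-readable way, so that aggregators can discover the terms without a human reading them. Here is a minimal sketch in Python, assuming the widely-used creativeCommons RSS module convention (the namespace URI and element name below reflect that module as commonly published; treat them as an assumption, not a specification I am vouching for):

```python
# Sketch: pulling a Creative Commons license declaration out of an RSS feed.
# Assumes the creativeCommons RSS module convention: a <creativeCommons:license>
# element, in the namespace below, holding the license URL.
import xml.etree.ElementTree as ET

CC_NS = "http://backend.userland.com/creativeCommonsRssModule"

feed = """<?xml version="1.0"?>
<rss version="2.0" xmlns:creativeCommons="{ns}">
  <channel>
    <title>Example vlog</title>
    <creativeCommons:license>http://creativecommons.org/licenses/by-sa/2.5/</creativeCommons:license>
    <item>
      <title>Episode 1</title>
    </item>
  </channel>
</rss>""".format(ns=CC_NS)

root = ET.fromstring(feed)
# The channel-level license serves as the default for every item in the feed.
license_url = root.findtext("channel/{%s}license" % CC_NS)
print(license_url)  # http://creativecommons.org/licenses/by-sa/2.5/
```

The point is simply that once the license travels with the micro-chunked content itself, downstream syndicators can honor it automatically.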
Part of the answer could come from the courts and the legislatures of the world. But I’m not holding my breath. A large number of lawsuits in the music and movie contexts has clarified our understanding of the rules around file-sharing, but not enough that the next generation of issues (including those to which YouTube and other web 2.0 applications give rise) is well-sorted.
Another part of the answer to this digital copyright issue might be provided by the market. One might imagine a process by which citizens who create user-generated content (think of a single YouTube video file or a syndicated vlog series, a podcast audio file or series of podcasts, a single online essay or a syndicated blog, a photo that perfectly captures a breaking news story or a series of evocative images, and so forth) might consistently adopt a default license (one of the CC licenses, or an “interoperable” license that enables another form of commercial distribution; I am persuaded that as much interoperability of licenses as possible is essential here) for all content that they create, with the ability also to adopt a separate license for an individual work that they may create in the future.
In addition to choosing this license (or these licenses) for their work, these users would register this work or these works, with licenses attached, in a central repository. Those who wished to reproduce these works would be on notice to check this repository, ideally through a very simple interface (possibly “machine-readable” as well as “human-readable” and “lawyer-readable,” to use the CC language), to determine the terms on which the creator is willing to enable the work to be reproduced (though not affecting in any way the fair use, implied license, or other grounds via which the works might otherwise be reproduced).
Some benefits of such a system:
– It would not affect the existing rights of copyright holders (or the public, for that matter, on the other side of the copyright bargain), but rather ride on top of that system (which might have the ancillary benefit of eventually permitting a global market to emerge, if licenses can be transposed effectively);
– It would allow those who wish to clarify the terms on which they are willing to have their works reproduced to do so in a default manner (i.e., “unless I say otherwise, it’s BY-SA”) but also to carve out some specific works for separate treatment (i.e., “… but for this picture, I am retaining all rights”);
– It might provide a mechanism, supplemental to CC licenses, for handshakes to take place online without lawyers involved;
– It might be coupled with a marketplace for automated licensing — and possibly clearance services — from creators to those who wish to reproduce the works;
– It could be adopted on top of (and in a manner complementary to) other systems: not just the copyright system at large, but also worthy services/aggregators of web 2.0 content, ranging from YouTube to software providers like SixApart, FeedBurner, Federated Media, Brad Feld’s posse of VCs, and so forth; and,
– It would represent a community-oriented creation of a market, which ultimately could support the development of a global market for both sharing and selling of user-generated content.
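The default-plus-override lookup at the heart of the repository idea can be sketched in a few lines. Everything here is hypothetical — the class and method names are invented for illustration and do not reflect any actual registry’s API:

```python
# Hypothetical sketch of the default-license-plus-override lookup described
# above. All names are invented for illustration.

class LicenseRegistry:
    def __init__(self):
        self._defaults = {}   # creator -> default license for all works
        self._overrides = {}  # (creator, work_id) -> per-work license

    def set_default(self, creator, license_name):
        """'Unless I say otherwise, it's BY-SA.'"""
        self._defaults[creator] = license_name

    def set_override(self, creator, work_id, license_name):
        """'... but for this picture, I am retaining all rights.'"""
        self._overrides[(creator, work_id)] = license_name

    def lookup(self, creator, work_id):
        # A work-specific grant beats the default; no entry at all means the
        # ordinary copyright rules (plus fair use, etc.) simply apply.
        return self._overrides.get((creator, work_id),
                                   self._defaults.get(creator))

registry = LicenseRegistry()
registry.set_default("alice", "CC BY-SA")
registry.set_override("alice", "photo-17", "All rights reserved")

print(registry.lookup("alice", "vlog-episode-3"))  # CC BY-SA
print(registry.lookup("alice", "photo-17"))        # All rights reserved
print(registry.lookup("bob", "essay-1"))           # None
```

Note the design choice: the registry rides on top of copyright rather than replacing it — an empty lookup changes nothing about the legal status quo.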
This system would not have much bearing on the Google/YouTube situation, but it might serve a key role in the development of web 2.0, or of user-generated content in general, and help to avoid a copyright trainwreck.
Microsoft has just unveiled a new commitment not to assert certain rights against people who develop code based on specifications that Microsoft has developed. It’s called the Open Specification Promise. Warning: the announcement itself, at the top of the page, is written in legalese, though probably pretty readable legalese. The FAQs make things a lot clearer for non-lawyer readers.
The upshot of this announcement is that it will hopefully turn out to be a Very Good Thing. Bravo to the lawyers and the policy people who no doubt worked very hard on it; the promise obviously reflects a huge amount of careful and open-minded thinking. The notion is that Microsoft agrees unilaterally not to come after people based on IP rights that the company holds with respect to a series of widely-used web services, such as SOAP and various of its progeny, WSDL, and so forth (all listed mid-way down the announcement page). From a geeky-lawyerly perspective, one of the things I like a lot is that the condition for availing oneself of the promise is that you yourself NOT participate voluntarily in a patent infringement suit related to the same specification — commitments of this sort could help to create an anti-patent-thicket. (Maybe, down the road, this aspect of the promise might not prove to be as great as I think it could be, but for now, from here, it looks very appealing, in a detente kind of way.)
Why could this promise help? Any promise of forbearance by a huge player — where they say they won’t stand in the way of your innovating on top of the work of others — is certainly positive. More than that, such a promise made “irrevocably” establishes a commitment on the part of the company for the long haul. Setting aside the legal enforceability of such a promise, the idea has enormous rhetorical force and would make it very hard for the company to backtrack and to go in another direction. Of course, the idea no doubt has good business judgment behind it in an era of dramatic growth in the open development of web services, including those related to security and to web 2.0 apps.
Why might it not be so great? Well, I think it is a great thing, and not just because we at the Berkman Center have been looking into interoperability, with support from Microsoft and others, and learning more about how companies are taking novel steps in this sort of direction. Its limitations might take a few forms, I suppose. The promise itself has limits — it applies to some specifications, and it extends only to some possible IPR-related claims, of course, but that seems natural, especially for such a first step. Other possible limitations: 1) Will developers pay attention to it, and in fact believe it? 2) Will this promise itself be interoperable with other such promises? I am reminded of Prof. Lessig’s speech at Wikimania last month, when he talked about interoperable licenses. Hopefully, others will either follow this lead or help developers to understand how this meshes with other similar promises of forbearance in the marketplace. 3) I don’t know well enough whether these are the right specifications to be included in such a promise. Are there other specs that developers would like to see opened up in this fashion?
Lawrence Lessig is giving a rousing lecture right now to a standing-room-only crowd in Ames Courtroom at Harvard Law School. It’s a plenary session of Wikimania 2006. He is in his element. It’s amazing to feel the energy in this room — unconveyable by blog or any other Internet-borne medium, but very very real.
Interoperability, he’s saying, is the key to the story — the Free Culture story — of which Wikipedia is such an illustrative chapter. The instinct to control a platform that you give (or sell) to other people is understandable, but it is also stupid. There needs to be interoperability and free standards that provide the widest range of freedoms for human beings to build upon the platform (sounds a lot like JZ’s Generativity).
We need to remember this lesson as we build a free culture. But we also need to make it possible for this platform to enable people to participate in a free culture. We need also to support the work of the Free Software Foundation and work toward free CODECs to allow content to flow across various platforms.
But we need to move past the technical layer, and enable a platform at the legal layer, too, one that protects free culture. The CC movement is an important piece of the story.
Yochai Benkler’s extraordinary book oozes with praise for Wikimedia. You are the central element, the central example, of Yochai’s wonderful argument. It is out of praise for all Wikimaniacs that Larry got on a plane at midnight, he says.
He’s also got a plea for everyone at Wikimania 2006: enable free culture, generally. There are two ways, he says, to do that:
1) Help others to spread the practice with your extraordinary example. There’s a CC/Wikimedia project — PDWiki — to help do this. It will put works in the hands of Canadians in digital form. Beyond demonstrating what you can do with works, it will help to establish what’s in the public domain and what’s not.
2) Demand a user platform for freedom. This came from a conversation with Jimbo Wales; they were drinking awful coffee in Europe. The problem was a lack of interoperability among islands of free culture. We need interoperability among licenses that allow you to do the same thing with the content. We need to support an ecology of different efforts seeking to achieve the same functional outcomes — just as the original web was architected, only this time for cultural works, for content, not for code.
The way it would work is not that CC would have control, but rather that Eben Moglen’s Software Freedom Law Center would be in charge of running the federation of free licenses. The outcome should be that you can say: derivatives of works under this license can be used under other equivalent licenses.
If we do not solve this problem now, we will face an ecological problem. These islands of free culture will never become anything but silos. We could do good here; we should do good here. Keep practicing the same kind of Wikimaniacal citizenship, he urges, that you’ve practiced to date, and get others to join you.
[Loads of applause.]
* * *
Elsewhere: CNet picks up the event itself as well as a wiki-photo-stream. Artsy, and nice. And Martin LaMonica has covered Lessig’s talk.
Dan Bricklin, David Isenberg, David Weinberger, Dave Winer, Doc Searls, Mitch Kapor, Wendy Seltzer, Yochai Benkler, and many other great people are in the room. An old-home week for the Berkman Center.