A SIMPLE MANIFESTO FOR A SANE INTERNET

We Can Pay For What We Use

This one's simple: while advertising may have been an ingenious solution to selling paper news and broadcast TV, its parasitic ugliness is no longer needed in the Internet age. We can pay for what we use! Relying on advertisers to fund services as diverse as messaging, social media, journalism, creative arts, and recreation is guaranteed to waste tremendous amounts of money and human capital, and to create perverse incentives that send our most innovative workers down the wrong alleys for decades. Micropayments could make all this go away by allowing Internet users to make thoughtful choices about what's valuable to them, and letting simple microeconomics take care of the rest.

There is No Such Thing as Intellectual Property

Digital computing technology has made it clear to us that data are numbers, and as we all know, no human being can own a number. Data doesn't have the same kind of exclusivity that we normally mean when we say "property" -- it can be infinitely reproduced and shared without dilution. Let's let a better writer fill out the discussion here: http://www.guardian.co.uk/technology/2008/feb/21/intellectual.property

Now hardly anyone suggests that it actually makes *sense* to use the property metaphor for data; the rebuttal forms instead around the fear that if we don't use a property metaphor in moral and legal arguments, we risk having to give everything away and thereby stifle innovation, because innovators will lose their self-interested incentive to produce. Innovation in ideas and data systems is an important part of our knowledge-based economy, no doubt, and if research indicates that a limited copyright (7-14 years?) is useful in spurring innovation (and I think most people agree on this), there's no reason we can't enforce one, while still being honest about what data is.
The core difference is that a copyright oriented around the idea that an inventor owns an idea and has a legal right to control it is inconsistent with the reality of data (and, history shows, leads to longer and longer copyright terms as capital fights against the intrusion on its "property" rights). A copyright seen instead as a license from the commons to an individual, who gets to benefit exclusively for a limited time from the use of a certain number in a certain context, still spurs discovery, but it also preserves a sane concept of information. If we want to do this right, research can find us an optimal enough copyright term, and we can learn to think about copyright as a limited license from the commons to the lucky folks who discover particularly useful numbers, and respect and celebrate the gift of all of it: the invention/discovery, the innovation it spurs, and the freedom we all get when the term expires.

Decentral/Distributed Networks (Dumb In The Middle, Smart at the Edges) Connect People Best

The modern history of computing devices is routinely visited by a flip/flop in the location of their brains. Early computers were the size of a building, with one control panel operated by one technician: the original version of the computer as a single closed unit with a single interface. As they grew more powerful, these systems' descendants could be time-shared among multiple terminals at once, giving birth to the idea of the computer as a distant and shared resource, accessed through many copies of a simple interface. As they grew even more powerful, computers shrank to the size of a large box, enabling the personal computer revolution (imagine, a whole computer right on your desk!), and the brains were back with the user.
In the late 1990s the enterprise technology revolution was going to be "thin clients": why not cut costs by buying a single very powerful PC server (back to the closet) and sharing its resources with cheap, dumbed-down terminals on your employees' desks? Meanwhile laptops and smartphones were replacing desktops, putting their smaller yet more powerful brains closer to the user than ever before. And now every website feels pressure to become an application, as Internet users are encouraged to view their own computer as merely a way of running a web browser to connect them to their real computer, a fraction of Google's latest data center halfway across the world.

Once the world's computers are networked together, this tendency to flip/flop becomes the question: should the network be smart in the middle or at the edges? Should the Internet look most like a massive spoked wheel, where your connection to the middle is everything because everything happens at the middle, or like a massive web of connections governed by the agents at the edges? Centralization, control, convenience, freedom, privacy, and security all seem to be dancing back and forth as Moore's law constantly changes the computer hardware landscape and global capitalism pushes innovation and consumers in whichever direction is most profitable in the short term. In the earliest days of the Internet, humans created ingenious distributed protocols (e.g., DNS, SMTP) to ensure that everybody could maintain democratic control of their own sites and even write their own ways of doing things, while still all being able to interoperate. Now capital wants us all to use centralized web services, and the Internet is at risk of turning into something that looks less like a democratic medium and more like a cable TV network.

Consider the most recent innovation, the tablet computer.
With CPUs faster than the gigantic enterprise servers of 10 years ago, storage that used to require libraries, huge high-resolution touchscreens (or will it be immersive goggles or electronic paper that win the interface?), super-accurate location technologies, and pervasive network connections, these tablets are nearly (if not already) capable of being the tiny pervasive personal computing agents imagined in the earliest visions of cyberspace. Yet they're still being used to access mostly closed and limited content in closed and limited ways, and no matter how many new information flows and economic models they enable, frustration still simmers in the average consumer and the development community.

Consider the most recent innovations in social media (Facebook and Twitter), and Google, the most recent innovator in everything. Their growth triggers such excitement! Their platforms create new communications protocols, replacing person-to-person protocols like phone calls or email or texts for a new generation of communicators. Then their failures trigger such anger: when issues about privacy and data ownership come to the fore, or when users experience outages in what's now a critical communications channel. That anyone can be surprised that their content isn't private or that their communications aren't 100% reliable is itself quite surprising, since they know, after all, that they're using a single centralized service owned and operated by a single company. When Twitter truly came of age in facilitating the Arab Spring's protests, how many noted that a service hosted by a single company outside one's nation's borders might not be the best tool for building democracy against a central dictator? This generation of services delivers on a certain promise, but they're still charming toys that fail spectacularly compared to what they could become.
It's sadly nearly indisputable that innovation is fastest when we use the tool of capitalism, and investors seem to love putting money into the centralized protocols, services, and systems which are easiest to understand and control. But eventually we may want to look back and remember the revolutionary power of the Internet in connecting people on their own terms, and understand how decentral systems best respect our natural notions of communication, identity, agency, and privacy. Let's make sure we're crafting what is, after all, becoming the fabric of our relationships in a way that allows us to be the active participants we want to be, rather than the dumb consumers that raw capitalism creates.

because Human Identity too is Decentral and Distributed

In the 1970s, online identity was pretty simple: everyone had a login account like firstname@computer.facility.edu and an email address like firstname.lastname@facility.edu, and everyone just assumed that those addresses had a one-to-one correspondence with real people at real organizations, and that was that. The Internet was a great place to communicate transparently with other people, extending your offline relationships. With the rise of spam and other breaks with the earlier social contract, the Internet's entry into the commercial and popular mainstream, and its identification with the more shadowy "cyberspace", it became clear that the Internet could also be a great place to lie, a great place to hide, and a space where one could create relationships that couldn't have existed offline. Online identity came to be dominated by concepts related to privacy, anonymity, and alternate identity creation: "On the Internet, nobody knows you're a dog".
After the turn of the century, all of these older conceptions survived and were further enhanced by the rise of social networking, in which the explicitly social databases and communications protocols that came to dominate online identity enabled and encouraged Internet users to self-represent, creating new meanings and identifications in their online representations. Having reached the apex of this pyramid, we inhabit a simultaneously overdetermined and fractured concept of online identity. We can choose from a variety of centralized and restrictive intermediaries against which to authenticate (John Smith on Facebook, jsmith on Twitter, john.smith@gmail.com, etc.). When we then interact with Rebecca Diaz on Facebook, should we take the attitude that we're connecting with our friend Rebecca just as if she were sitting nearby (70s)? Should we take pains to be cognizant of the medium's capacity for deception, fraud, and identity creation -- it might be someone masquerading as Rebecca (80s/90s)? Or do we need to take the even more nuanced critical view that Rebecca is creating an online projection of herself that suggests who she wants to be more than who she is -- and what do *we* want to project in this context?

These conceptual tensions may seem abstract, but they become very concrete when we have to decide which mix of social networks to identify via, when we forget a password at a particular identity intermediary and have to try to prove who we "are" to get it back, when we're taken in by spam or phishing, when we find that we've left our laptop logged into Facebook and our little sister has masqueraded her way into an embarrassing status update to all our friends, when our Facebook account is hacked and abused for more nefarious purposes, or when we're the victim of even more serious online identity fraud. What an overload!
There are certainly folks who enjoy inhabiting the upper "play" levels of online identity, and we don't need to destroy those. But as the boundary between offline and online blurs, more and more of our real communications and transactions are occurring on the Internet, and this is going to require online identity tools which hearken back to the 70s' one-to-one correspondence with our offline human identities. We're going to need to develop social & identity protocols which govern how we identify ourselves and interact online by extending our natural pre-Internet identities and relationships (with each other, commercial agents, and governments). They'll need to be at once decentral and distributed (like our own selves and relationships) yet reliable and trusted (like modern cryptography).
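As one hedged illustration of how identity can be decentral yet trusted, consider the self-certifying identifier: a name derived from a hash of the user's own public key, so anyone can check a claimed key/name binding locally, with no central registry or intermediary to consult. This is only a toy sketch in plain Python -- the random byte string standing in for a real public key, and the function names, are assumptions for illustration, not a real identity protocol:

```python
import hashlib
import os

def make_identity(public_key: bytes) -> str:
    # A self-certifying identifier: the name *is* a hash of the key,
    # so no central authority is needed to bind name to key.
    return hashlib.sha256(public_key).hexdigest()[:16]

def verify_claim(identity: str, claimed_key: bytes) -> bool:
    # Anyone can recompute the hash and compare, entirely at the edge,
    # without querying a centralized intermediary.
    return make_identity(claimed_key) == identity

# Toy example: random bytes stand in for a real public key.
alice_key = os.urandom(32)
alice_id = make_identity(alice_key)

print(verify_claim(alice_id, alice_key))       # the genuine key verifies: True
print(verify_claim(alice_id, os.urandom(32)))  # an impostor's key fails: False
```

In a real protocol the key would also sign messages, so that holding the private key -- not an account at any company -- is what proves you are "you"; the point of the sketch is only that the trust check itself needs no middle.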