Saturday, November 12, 2011

Cognitive Surplus: Creativity and Generosity in a Connected Age - Clay Shirky - 9781594202537



DESCRIPTION

The author of the breakout hit Here Comes Everybody reveals how new technology is changing us from consumers to collaborators, unleashing a torrent of creative production that will transform our world.

For decades, technology encouraged people to squander their time and intellect as passive consumers. Today, tech has finally caught up with human potential. In Cognitive Surplus, Internet guru Clay Shirky forecasts the thrilling changes we will all enjoy as new digital technology puts our untapped resources of talent and goodwill to use at last.

Since we Americans were suburbanized and educated by the postwar boom, we've had a surfeit of intellect, energy, and time: what Shirky calls a cognitive surplus. But this abundance had little impact on the common good because television consumed the lion's share of it, and we consume TV passively, in isolation from one another. Now, for the first time, people are embracing new media that allow us to pool our efforts at vanishingly low cost. The results of this aggregated effort range from the mind-expanding (reference tools like Wikipedia) to the lifesaving (Ushahidi.com, which has allowed Kenyans to sidestep government censorship and report on acts of violence in real time).

Shirky argues persuasively that this cognitive surplus, rather than being some strange new departure from normal behavior, actually returns our society to forms of collaboration that were natural to us up through the early twentieth century. He also charts the vast effects that our cognitive surplus, aided by new technologies, will have on twenty-first-century society, and how we can best exploit those effects. Shirky envisions an era of lower creative quality on average but greater innovation, an increase in transparency in all areas of society, and a dramatic rise in productivity that will transform our civilization.

The potential impact of cognitive surplus is enormous. As Shirky points out, Wikipedia was built out of roughly 1 percent of the man-hours that Americans spend watching TV every year. Wikipedia and other current products of cognitive surplus are only the iceberg's tip. Shirky shows how society and our daily lives will be improved dramatically as we learn to exploit our goodwill and free time like never before.

About the Author

Clay Shirky teaches at the Interactive Telecommunications Program at New York University, where he researches the interrelated effects of our social and technological networks. He has consulted with a variety of groups working on network design, including Nokia, the BBC, Newscorp, Microsoft, BP, Global Business Network, the Library of Congress, the U.S. Navy, the Libyan government, and Lego®. His writings have appeared in The New York Times, The Wall Street Journal, The Times (of London), Harvard Business Review, Business 2.0, and Wired.


PREVIEW

1

Gin, Television, and Cognitive Surplus

In the 1720s, London was busy getting drunk. Really drunk. The city was in the grips of a gin-drinking binge, largely driven by new arrivals from the countryside in search of work. The characteristics of gin were attractive: fermented with grain that could be bought locally, packing a kick greater than that of beer, and considerably less expensive than imported wine, gin became a kind of anesthetic for the burgeoning population enduring profound new stresses of urban life. These stresses generated new behaviors, including what came to be called the Gin Craze.

Gin pushcarts plied the streets of London; if you couldn’t afford a whole glass, you could buy a gin-soaked rag, and flop-houses did brisk business renting straw pallets by the hour if you needed to sleep off the effects. It was a kind of social lubricant for people suddenly tipped into an unfamiliar and often unforgiving life, keeping them from completely falling apart. Gin offered its consumers the ability to fall apart a little bit at a time. It was a collective bender, at civic scale.

The Gin Craze was a real event—gin consumption rose dramatically in the early 1700s, even as consumption of beer and wine remained flat. It was also a change in perception. England’s wealthy and titled were increasingly alarmed by what they saw in the streets of London. The population was growing at a historically unprecedented rate, with predictable effects on living conditions and public health, and crime of all sorts was on the rise. Especially upsetting was that the women of London had taken to drinking gin, often gathering in mixed-sex gin halls, proof positive of its corrosive effects on social norms.

It isn’t hard to figure out why people were drinking gin. It is palatable and intoxicating, a winning combination, especially when a chaotic world can make sobriety seem overrated. Gin drinking provided a coping mechanism for people suddenly thrown together in the early decades of the industrial age, making it an urban phenomenon, especially concentrated in London. London was the site of the biggest influx of population as a result of industrialization. From the mid-1600s to the mid-1700s, the population of London grew two and a half times as fast as the overall population of England; by 1750, one English citizen in ten lived there, up from one in twenty-five a century earlier.

Industrialization didn’t just create new ways of working, it created new ways of living, because the relocation of the population destroyed ancient habits common to country living, while drawing so many people together that the new density of the population broke the older urban models as well. In an attempt to restore London’s preindustrial norms, Parliament seized on gin. Starting in the late 1720s, and continuing over the next three decades, it passed law after law prohibiting various aspects of gin’s production, consumption, or sale. This strategy was ineffective, to put it mildly. The result was instead a thirty-year cat-and-mouse game of legislation to prevent gin consumption, followed by the rapid invention of ways to defeat those laws. Parliament outlawed “flavored spirits,” so distillers stopped adding juniper berries to the liquor. Selling gin was made illegal; women sold from bottles hidden beneath their skirts, and some entrepreneurial types created the “puss and mew,” a cabinet set on the streets where a customer could approach and, if they knew the password, hand their money to the vendor hidden inside and receive a dram of gin in return.

What made the craze subside wasn’t any set of laws. Gin consumption was treated as the problem to be solved, when in fact it was a reaction to the real problem: dramatic social change and the inability of older civic models to adapt. What helped the Gin Craze subside was the restructuring of society around the new urban realities created by London’s incredible social density, a restructuring that turned London into what we’d recognize as a modern city, one of the first. Many of the institutions we mean when we talk about “the industrialized world” actually arose in response to the social climate created by industrialization, rather than to industrialization itself. Mutual aid societies provided shared management of risk outside the traditional ties of kin and church. The spread of coffeehouses and later restaurants was spurred by concentrated populations. Political parties began to recruit the urban poor and to field candidates more responsive to them. These changes came about only when civic density stopped being treated as a crisis and started being treated as a simple fact, even an opportunity. Gin consumption, driven upward in part by people anesthetizing themselves against the horrors of city life, started falling, in part because the new social structures mitigated these horrors. The increase in both population and aggregate wealth made it possible to invent new kinds of institutions; instead of madding crowds, the architects of the new society saw a civic surplus, created as a side effect of industrialization.

And what of us? What of our historical generation? That section of the global population we still sometimes refer to as “the industrialized world” has actually been transitioning to a postindustrial form for some time. The postwar trends of emptying rural populations, urban growth, and increased suburban density, accompanied by rising educational attainment across almost all demographic groups, have marked a huge increase in the number of people paid to think or talk, rather than to produce or transport objects. During this transition, what has been our gin, the critical lubricant that eased our transition from one kind of society to another?

The sitcom. Watching sitcoms—and soap operas, costume dramas, and the host of other amusements offered by TV—has absorbed the lion’s share of the free time available to the citizens of the developed world.

Since the Second World War, increases in GDP, educational attainment, and life span have forced the industrialized world to grapple with something we’d never had to deal with on a national scale: free time. The amount of unstructured time cumulatively available to the educated population ballooned, both because the educated population itself ballooned, and because that population was living longer while working less. (Segments of the population experienced an upsurge of education and free time before the 1940s, but they tended to be in urban enclaves, and the Great Depression reversed many of the existing trends for both schooling and time off from work.) This change was accompanied by a weakening of traditional uses of that free time as a result of suburbanization—moving out of cities and living far from neighbors—and of periodic relocation as people moved for jobs. The cumulative free time in the postwar United States began to add up to billions of collective hours per year, even as picnics and bowling leagues faded into the past. So what did we do with all that time? Mostly, we watched TV.

We watched I Love Lucy. We watched Gilligan’s Island. We watched Malcolm in the Middle. We watched Desperate Housewives. We had so much free time to burn and so few other appealing ways to burn it that every citizen in the developed world took to watching television as if it were a duty. TV quickly took up the largest chunk of our free time: an average of over twenty hours a week, worldwide. In the history of media, only radio has been as omnipresent, and much radio listening accompanies other activities, like work or travel. For most people most of the time, watching TV is the activity. (Because TV goes in through the eyes as well as the ears, it immobilizes even moderately attentive users, freezing them on chairs and couches, as a prerequisite for consumption.)

The sitcom has been our gin, an infinitely expandable response to the crisis of social transformation, and as with drinking gin, it isn’t hard to explain why people watch individual television programs—some of them are quite good. What’s hard to explain is how, in the space of a generation, watching television became a part-time job for every citizen in the developed world. Toxicologists like to say “The dose makes the poison”; both alcohol and caffeine are fine in moderation but fatal in excess. Similarly, the question of TV isn’t about the content of individual shows but about their volume: the effect on individuals, and on the culture as a whole, comes from the dose. We didn’t just watch good TV or bad TV, we watched everything—sitcoms, soap operas, infomercials, the Home Shopping Network. The decision to watch TV often preceded any concern about what might be on at any given moment. It isn’t what we watch, but how much of it, hour after hour, day after day, year in and year out, over our lifetimes. Someone born in 1960 has watched something like fifty thousand hours of TV already, and may watch another thirty thousand hours before she dies.

This isn’t just an American phenomenon. Since the 1950s, any country with rising GDP has invariably seen a reordering of human affairs; in the whole of the developed world, the three most common activities are now work, sleep, and watching TV. All this is despite considerable evidence that watching that much television is an actual source of unhappiness. In an evocatively titled 2007 study from the Journal of Economic Psychology—“Does Watching TV Make Us Happy?”—the behavioral economists Bruno Frey, Christine Benesch, and Alois Stutzer conclude that not only do unhappy people watch considerably more TV than happy people, but TV watching also pushes aside other activities that are less immediately engaging but can produce longer-term satisfaction. Spending many hours watching TV, on the other hand, is linked to higher material aspirations and to raised anxiety.

The thought that watching all that TV may not be good for us has hardly been unspoken. For the last half century, media critics have been wringing their hands until their palms chafed over the effects of television on society, from Newton Minow’s famous description of TV as a “vast wasteland” to epithets like “idiot box” and “boob tube” to Roald Dahl’s wicked characterization of the television-obsessed Mike Teavee in Charlie and the Chocolate Factory. Despite their vitriol, these complaints have been utterly ineffective—in every year of the last fifty, television watching per capita has grown. We’ve known about the effects of TV on happiness, first anecdotally and later through psychological research, for decades, but that hasn’t curtailed its growth as the dominant use of our free time. Why?

For the same reason that the disapproval of Parliament didn’t reduce the consumption of gin: the dramatic growth in TV viewing wasn’t the problem, it was the reaction to the problem. Humans are social creatures, but the explosion of our surplus of free time coincided with a steady reduction in social capital—our stock of relationships with people we trust and rely on. One clue about the astonishing rise of TV-watching time comes from its displacement of other activities, especially social activities. As Jib Fowles notes in Why Viewers Watch, “Television viewing has come to displace principally (a) other diversions, (b) socializing, and (c) sleep.” One source of television’s negative effects has been the reduction in the amount of human contact, an idea called the social surrogacy hypothesis.

Social surrogacy has two parts. Fowles expresses the first—we have historically watched so much TV that it displaces all other uses of free time, including time with friends and family. The other is that the people we see on television constitute a set of imaginary friends. The psychologists Jaye Derrick and Shira Gabriel of the University at Buffalo and Kurt Hugenberg of Miami University of Ohio concluded that people turn to favored programs when they are feeling lonely, and that they feel less lonely when they are viewing those programs. This shift helps explain how TV became our most embraced optional activity, even at a dose that both correlates with and can cause unhappiness: whatever its disadvantages, it’s better than feeling like you’re alone, even if you actually are. Because watching TV is something you can do alone, while it assuages the feelings of loneliness, it had the right characteristics to become popular as society spread out from dense cities and tightly knit rural communities to the relative disconnection of commuter work patterns and frequent relocations. Once a home has a TV, there is no added cost to watching an additional hour.

Watching TV thus creates something of a treadmill. As Luigino Bruni and Luca Stanca note in “Watching Alone,” a recent paper in the Journal of Economic Behavior and Organization, television viewing plays a key role in crowding-out social activities with solitary ones. Marco Gui and Luca Stanca take on the same phenomenon in their 2009 working paper “Television Viewing, Satisfaction and Happiness”: “television can play a significant role in raising people’s materialism and material aspirations, thus leading individuals to underestimate the relative importance of interpersonal relations for their life satisfaction and, as a consequence, to over-invest in income-producing activities and under-invest in relational activities.” Translated from the dry language of economics, underinvesting in relational activities means spending less time with friends and family, precisely because watching a lot of TV leads us to shift more energy to material satisfaction and less to social satisfaction.

Our cumulative decision to commit the largest chunk of our free time to consuming a single medium really hit home for me in 2008, after the publication of Here Comes Everybody, a book I’d written about social media. A TV producer who was trying to decide whether I should come on her show to discuss the book asked me, “What interesting uses of social media are you seeing now?”

I told her about Wikipedia, the collaboratively created encyclopedia, and about the Wikipedia article on Pluto. Back in 2006, Pluto was getting kicked out of the planet club—astronomers had concluded that it wasn’t enough like the other planets to make the cut, so they proposed redefining planet in such a way as to exclude it. As a result, Wikipedia’s Pluto page saw a sudden spike in activity. People furiously edited the article to take account of the proposed change in Pluto’s status, and the most committed group of editors disagreed with one another about how best to characterize the change. During this conversation, they updated the article—contesting sections, sentences, and even word choice throughout—transforming the essence of the article from “Pluto is the ninth planet” to “Pluto is an odd-shaped rock with an odd-shaped orbit at the edge of the solar system.”

I assumed that the producer and I would jump into a conversation about social construction of knowledge, the nature of authority, or any of the other topics that Wikipedia often generates. She didn’t ask any of those questions, though. Instead, she sighed and said, “Where do people find the time?” Hearing this, I snapped, and said, “No one who works in TV gets to ask that question. You know where the time comes from.” She knew, because she worked in the industry that had been burning off the lion’s share of our free time for the last fifty years.

Imagine treating the free time of the world’s educated citizenry as an aggregate, a kind of cognitive surplus. How big would that surplus be? To figure it out, we need a unit of measurement, so let’s start with Wikipedia. Suppose we consider the total amount of time people have spent on it as a kind of unit—every edit made to every article, and every argument about those edits, for every language that Wikipedia exists in. That would represent something like one hundred million hours of human thought, back when I was talking to the TV producer. (Martin Wattenberg, an IBM researcher who has spent time studying Wikipedia, helped me arrive at that figure. It’s a back-of-the-envelope calculation, but it’s the right order of magnitude.) One hundred million hours of cumulative thought is obviously a lot. How much is it, though, compared to the amount of time we spend watching television?

Americans watch roughly two hundred billion hours of TV every year. That represents about two thousand Wikipedia projects’ worth of free time annually. Even tiny subsets of this time are enormous: we spend roughly a hundred million hours every weekend just watching commercials. This is a pretty big surplus. People who ask “Where do they find the time?” about those who work on Wikipedia don’t understand how tiny that entire project is, relative to the aggregate free time we all possess. One thing that makes the current age remarkable is that we can now treat free time as a general social asset that can be harnessed for large, communally created projects, rather than as a set of individual minutes to be whiled away one person at a time.
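A quick sanity check of the back-of-the-envelope arithmetic above, as a minimal Python sketch. The inputs are the book's own round estimates (roughly 100 million cumulative hours for all of Wikipedia, 200 billion hours of annual U.S. TV viewing, and 100 million weekend hours spent on commercials), not independently sourced figures.

# Rough check of the "cognitive surplus" arithmetic quoted in the excerpt.
# All figures are the book's own round numbers, not fresh data.
WIKIPEDIA_TOTAL_HOURS = 100e6       # ~100 million hours of cumulative effort on Wikipedia
US_TV_HOURS_PER_YEAR = 200e9        # ~200 billion hours of TV watched in the US each year
US_AD_HOURS_PER_WEEKEND = 100e6     # ~100 million hours of commercials watched per US weekend

print(f"Annual US TV time is about {US_TV_HOURS_PER_YEAR / WIKIPEDIA_TOTAL_HOURS:,.0f} Wikipedia-sized projects")
# -> Annual US TV time is about 2,000 Wikipedia-sized projects

print(f"One weekend of US commercial viewing is about {US_AD_HOURS_PER_WEEKEND / WIKIPEDIA_TOTAL_HOURS:.0f} Wikipedia-sized project")
# -> One weekend of US commercial viewing is about 1 Wikipedia-sized project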

Society never really knows what to do with any surplus at first. (That’s what makes it a surplus.) For most of the time when we’ve had a truly large-scale surplus in free time—billions and then trillions of hours a year—we’ve spent it consuming television, because we judged that use of time to be better than the available alternatives. Sure, we could have played outdoors or read books or made music with our friends, but we mostly didn’t, because the thresholds to those activities were too high, compared to just sitting and watching. Life in the developed world includes a lot of passive participation: at work we’re office drones, at home we’re couch potatoes. The pattern is easy enough to explain by assuming we’ve wanted to be passive participants more than we wanted other things. This story has been, in the last several decades, pretty plausible; a lot of evidence certainly supported this view, and not a lot contradicted it.

But now, for the first time in the history of television, some cohorts of young people are watching TV less than their elders. Several population studies—of high school students, broadband users, YouTube users—have noticed the change, and their basic observation is always the same: young populations with access to fast, interactive media are shifting their behavior away from media that presupposes pure consumption. Even when they watch video online, seemingly a pure analog to TV, they have opportunities to comment on the material, to share it with their friends, to label, rate, or rank it, and of course, to discuss it with other viewers around the world. As Dan Hill noted in a much-cited online essay, “Why Lost Is Genuinely New Media,” the viewers of that show weren’t just viewers—they collaboratively created a compendium of material related to that show called (what else?) Lostpedia. Even when they are engaged in watching TV, in other words, many members of the networked population are engaged with one another, and this engagement correlates with behaviors other than passive consumption.

The choices leading to reduced TV consumption are at once tiny and enormous. The tiny choices are individual; someone simply decides to spend the next hour talking to friends or playing a game or creating something instead of just watching. The enormous choices are collective ones, an accumulation of those tiny choices by the millions; the cumulative shift toward participation across a whole population enables the creation of a Wikipedia. The television industry has been shocked to see alternative uses of free time, especially among young people, because the idea that watching TV was the best use of free time, as ratified by the viewers, has been such a stable feature of society for so long. (Charlie Leadbeater, the U.K. scholar of collaborative work, reports that a TV executive recently told him that participatory behavior among the young will go away when they grow up, because work will so exhaust them that they won’t be able to do anything with their free time but “slump in front of the TV.”) Believing that the past stability of this behavior meant it would be a stable behavior in the future as well turned out to be a mistake—and not just any mistake, but a particular kind of mistake.





MILKSHAKE MISTAKES


When McDonald’s wanted to improve sales of its milkshakes, it hired researchers to figure out what characteristics its customers cared about. Should the shakes be thicker? Sweeter? Colder? Almost all of the researchers focused on the product. But one of them, Gerald Berstell, chose to ignore the shakes themselves and study the customers instead. He sat in a McDonald’s for eighteen hours one day, observing who bought milkshakes and at what time. One surprising discovery was that many milkshakes were purchased early in the day—odd, as consuming a shake at eight A.M. plainly doesn’t fit the bacon-and-eggs model of breakfast. Berstell also garnered three other behavioral clues from the morning milkshake crowd: the buyers were always alone, they rarely bought anything besides a shake, and they never consumed the shakes in the store.

The breakfast-shake drinkers were clearly commuters, intending to drink them while driving to work. This behavior was readily apparent, but the other researchers had missed it because it didn’t fit the normal way of thinking about either milkshakes or breakfast. As Berstell and his colleagues noted in “Finding the Right Job for Your Product,” their essay in the Harvard Business Review, the key to understanding what was going on was to stop viewing the product in isolation and to give up traditional notions of the morning meal. Berstell instead focused on a single, simple question: “What job is a customer hiring that milkshake to do at eight A.M.?”

If you want to eat while you are driving, you need something you can eat with one hand. It shouldn’t be too hot, too messy, or too greasy. It should also be moderately tasty, and take a while to finish. Not one conventional breakfast item fits that bill, and so without regard for the sacred traditions of the morning meal, those customers were hiring the milkshake to do the job they needed done.

All the researchers except Berstell missed this fact, because they made two kinds of mistakes, things we might call “milkshake mistakes.” The first was to concentrate mainly on the product and assume that everything important about it was somehow implicit in its attributes, without regard to what role the customers wanted it to play—the job they were hiring the milkshake for.

The second mistake was to adopt a narrow view of the type of food people have always eaten in the morning, as if all habits were deeply rooted traditions instead of accumulated accidents. Neither the shake itself nor the history of breakfast mattered as much as customers needing food to do a nontraditional job—serve as sustenance and amusement for their morning commute—for which they hired the milkshake.

We have the same problems thinking about media. When we talk about the effects of the web or text messages, it’s easy to make a milkshake mistake and focus on the tools themselves. (I speak from personal experience—much of the work I did in the 1990s focused obsessively on the capabilities of computers and the internet, with too little regard for the way human desires shaped them.)

The social uses of our new media tools have been a big surprise, in part because the possibility of these uses wasn’t implicit in the tools themselves. A whole generation had grown up with personal technology, from the portable radio through the PC, so it was natural to expect them to put the new media tools to personal use as well. But the use of a social technology is much less determined by the tool itself; when we use a network, the most important asset we get is access to one another. We want to be connected to one another, a desire that the social surrogate of television deflects, but one that our use of social media actually engages.

It’s also easy to assume that the world as it currently exists represents some sort of ideal expression of society, and that all deviations from this sacred tradition are both shocking and bad. Although the internet is already forty years old, and the web half that age, some people are still astonished that individual members of society, previously happy to spend most of their free time consuming, would start voluntarily making and sharing things. This making-and-sharing is certainly a surprise compared to the previous behavior. But pure consumption of media was never a sacred tradition; it was just a set of accumulated accidents, accidents that are being undone as people start hiring new communications tools to do jobs older media simply can’t do.

To pick one example, a service called Ushahidi was developed to help citizens track outbreaks of ethnic violence in Kenya. In December 2007 a disputed election pitted supporters and opponents of President Mwai Kibaki against one another. Ory Okolloh, a Kenyan political activist, blogged about the violence when the Kenyan government banned the mainstream media from reporting on it. She then asked her readers to e-mail or post comments about the violence they were witnessing on her blog. The method proved so popular that her blog, Kenyan Pundit, became a critical source of first-person reporting. The observations kept flooding in, and within a couple of days Okolloh could no longer keep up with them. She imagined a service, which she dubbed Ushahidi (Swahili for “witness” or “testimony”), that would automatically aggregate citizen reporting (she had been doing it by hand), with the added value of locating the reported attacks on a map in near-real time. She floated the idea on her blog, which attracted the attention of the programmers Erik Hersman and David Kobia. The three of them got on a conference call and hashed out how such a service might work, and within three days, the first version of Ushahidi went live.

People normally find out about the kind of violence that took place after the Kenyan election only if it happens nearby. There is no public source where people can go to locate trouble spots, either to understand what’s going on or to offer help. We’ve typically relied on governments or professional media to inform us about collective violence, but in Kenya in early 2008 the professionals weren’t covering it, out of partisan fervor or censorship, and the government had no incentive to report anything.

Ushahidi was developed to aggregate this available but dispersed knowledge, to collectively weave together all the piecemeal awareness among individual witnesses into a nationwide picture. Even if the information the public wanted existed someplace in the government, Ushahidi was animated by the idea that rebuilding it from scratch, with citizen input, was easier than trying to get it from the authorities. The project started as a website, but the Ushahidi developers quickly added the ability to submit information via text message from mobile phones, and that’s when the reports really poured in. Several months after Ushahidi launched, Harvard’s Kennedy School of Government did an analysis that compared the site’s data to that of the mainstream media and concluded that Ushahidi had been better at reporting acts of violence as they started, as opposed to after the fact, better at reporting acts of nonfatal violence, which are often a precursor to deaths, and better at reporting over a wide geographical area, including rural districts.

All of this information was useful—governments the world over act less violently toward their citizens when they are being observed, and Kenyan NGOs used the data to target humanitarian responses. But that was just the beginning. Realizing the site’s potential, the founders decided to turn Ushahidi into a platform so that anyone could set up their own service for collecting and mapping information reported via text message. The idea of making it easy to tap various kinds of collective knowledge has spread from the original Kenyan context. Since its debut in early 2008, Ushahidi has been used to track similar acts of violence in the Democratic Republic of Congo, to monitor polling places and prevent voter fraud in India and Mexico, to record supplies of vital medicines in several East African countries, and to locate the injured after the Haitian and Chilean earthquakes.

A handful of people, working with cheap tools and little time or money to spare, managed to carve out enough collective goodwill from the community to create a resource that no one could have imagined even five years ago. Like all good stories, the story of Ushahidi holds several different lessons: People want to do something to make the world a better place. They will help when they are invited to. Access to cheap, flexible tools removes many of the barriers to trying new things. You don’t need fancy computers to harness cognitive surplus; simple phones are enough. But one of the most important lessons is this: once you’ve figured out how to tap the surplus in a way that people care about, others can replicate your technique, over and over, around the world.

Ushahidi.com, designed to help a distressed population in a difficult time, is remarkable, but not all new communications tools are so civically engaged; in fact, most aren’t. For every remarkable project like Ushahidi or Wikipedia, there are countless pieces of throwaway work, created with little effort, and targeting no positive effect greater than crude humor. The canonical example at present is the lolcat, a cute picture of a cat that is made even cuter by the addition of a cute caption, the ideal effect of “cat plus caption” being to make the viewer laugh out loud (thus putting the lol in lolcat). The largest collection of such images is a website called ICanHasCheezburger.com, named after its inaugural image: a gray cat, mouth open, staring maniacally, bearing the caption “I Can Has Cheezburger?” (Lolcats are notoriously poor spellers.) ICanHasCheezburger.com has more than three thousand lolcat images—“i have bad day,” “im steelin som ur foodz k thx bai,” “BANDIT CAT JUST ATED UR BURRITOZ”—each of which garners dozens or hundreds of comments, also written in lolspeak. We are far from Ushahidi now.

Let’s nominate the process of making a lolcat as the stupidest possible creative act. (There are other candidates, of course, but lolcats will do as a general case.) Formed quickly and with a minimum of craft, the average lolcat image has the social value of a whoopee cushion and the cultural life span of a mayfly. Yet anyone seeing a lolcat gets a second, related message: You can play this game too. Precisely because lolcats are so transparently created, anyone can add a dopey caption to an image of a cute cat (or dog, or hamster, or walrus—Cheezburger is an equal-opportunity time waster) and then share that creation with the world.

Lolcat images, dumb as they are, have internally consistent rules, everything from “Captions should be spelled phonetically” to “The lettering should use a sans-serif font.” Even at the stipulated depths of stupidity, in other words, there are ways to do a lolcat wrong, which means there are ways to do it right, which means there is some metric of quality, even if limited. However little the world needs the next lolcat, the message You can play this game too is a change from what we’re used to in the media landscape. The stupidest possible creative act is still a creative act.

Much of the objection to lolcats focuses on how stupid they are; even a funny lolcat doesn’t amount to much. On the spectrum of creative work, the difference between the mediocre and the good is vast. Mediocrity is, however, still on the spectrum; you can move from mediocre to good in increments. The real gap is between doing nothing and doing something, and someone making lolcats has bridged that gap.

As long as the assumed purpose of media is to allow ordinary people to consume professionally created material, the proliferation of amateur-created stuff will seem incomprehensible. What amateurs do is so, well, unprofessional—lolcats as a kind of low-grade substitute for the Cartoon Network. But what if, all this time, providing professional content isn’t the only job we’ve been hiring media to do? What if we’ve also been hiring it to make us feel connected, engaged, or just less lonely? What if we’ve always wanted to produce as well as consume, but no one offered us that opportunity? The pleasure in You can play this game too isn’t just in the making, it’s also in the sharing. The phrase “user-generated content,” the current label for creative acts by amateurs, really describes not just personal but also social acts. Lolcats aren’t just user-generated, they are user-shared. The sharing, in fact, is what makes the making fun—no one would create a lolcat to keep for themselves.

The atomization of social life in the twentieth century left us so far removed from participatory culture that when it came back, we needed the phrase “participatory culture” to describe it. Before the twentieth century, we didn’t really have a phrase for participatory culture; in fact, it would have been something of a tautology. A significant chunk of culture was participatory—local gatherings, events, and performances—because where else could culture come from? The simple act of creating something with others in mind and then sharing it with them represents, at the very least, an echo of that older model of culture, now in technological raiment. Once you accept the idea that we actually like making and sharing things, however dopey in content or poor in execution, and that making one another laugh is a different kind of activity from being made to laugh by people paid to make us laugh, then in some ways the Cartoon Network is a low-grade substitute for lolcats....


DETAILS

TITLE: Cognitive Surplus: Creativity and Generosity in a Connected Age

Author: Clay Shirky

Language: English

ISBN: 9781594202537

Format: TXT, EPUB, MOBI

DRM-free: no restrictions

SKU: 2surplusshirky2



SHARE LINKS

Mediafire (recommended)
Rapidshare
Megaupload
