For $25, anyone could take pictures. The camera came preloaded with film, and when it had been used, the camera was returned to an Eastman factory, where the film was developed. Over time, of course, the cost of the camera and the ease with which it could be used both improved. Roll film thus became the basis for the explosive growth of popular photography. Eastman's camera first went on sale in 1888; one year later, Kodak was printing more than six thousand negatives a day. From 1888 through 1909, while industrial production was rising by 4.7 percent, photographic equipment and material sales increased by 11 percent.[3] Eastman Kodak's sales during the same period experienced an average annual increase of over 17 percent.[4]
The real significance of Eastman's invention, however, was not economic. It was social. Professional photography gave individuals a glimpse of places they would never otherwise see. Amateur photography gave them the ability to record their own lives in a way they had never been able to do before. As author Brian Coe notes, “For the first time the snapshot album provided the man on the street with a permanent record of his family and its activities. . . For the first time in history there exists an authentic visual record of the appearance and activities of the common man made without [literary] interpretation or bias.”[5]
In this way, the Kodak camera and film were technologies of expression. Pencils and paintbrushes were also technologies of expression, of course. But it took years of training before they could be deployed by amateurs in any useful or effective way. With the Kodak, expression was possible much sooner and more simply. The barrier to expression was lowered. Snobs would sneer at its “quality”; professionals would discount it as irrelevant. But watch a child study how best to frame a picture and you get a sense of the experience of creativity that the Kodak enabled. Democratic tools gave ordinary people a way to express themselves more easily than any tools could have before.
What was required for this technology to flourish? Obviously, Eastman's genius was an important part. But also important was the legal environment within which Eastman's invention grew. For early in the history of photography, there was a series of judicial decisions that could well have changed the course of photography substantially. Courts were asked whether the photographer, amateur or professional, required permission before he could capture and print whatever image he wanted. Their answer was no.[6]
The arguments in favor of requiring permission will sound surprisingly familiar. The photographer was “taking” something from the person or building whose photograph he shot—pirating something of value. Some even thought he was taking the target's soul. Just as Disney was not free to take the pencils that his animators used to draw Mickey, so, too, should these photographers not be free to take images that they thought valuable.
On the other side was an argument that should be familiar, as well. Sure, there may be something of value being used. But citizens should have the right to capture at least those images that stand in public view. (Louis Brandeis, who would become a Supreme Court Justice, thought the rule should be different for images from private spaces.[7]) It may be that this means that the photographer gets something for nothing. Just as Disney could take inspiration from Steamboat Bill, Jr. or the Brothers Grimm, the photographer should be free to capture an image without compensating the source.
Fortunately for Mr. Eastman, and for photography in general, these early decisions went in favor of the pirates. In general, no permission would be required before an image could be captured and shared with others. Instead, permission was presumed. Freedom was the default. (The law would eventually craft an exception for famous people: commercial photographers who snap pictures of famous people for commercial purposes have more restrictions than the rest of us. But in the ordinary case, the image can be captured without clearing the rights to do the capturing.[8])
We can only speculate about how photography would have developed had the law gone the other way. If the presumption had been against the photographer, then the photographer would have had to demonstrate permission. Perhaps Eastman Kodak would have had to demonstrate permission, too, before it developed the film upon which images were captured. After all, if permission were not granted, then Eastman Kodak would be benefiting from the “theft” committed by the photographer. Just as Napster benefited from the copyright infringements committed by Napster users, Kodak would be benefiting from the “image-right” infringement of its photographers. We could imagine the law then requiring that some form of permission be demonstrated before a company developed pictures. We could imagine a system developing to demonstrate that permission.
But though we could imagine this system of permission, it would be very hard to see how photography could have flourished as it did if the requirement for permission had been built into the rules that govern it. Photography would have existed. It would have grown in importance over time. Professionals would have continued to use the technology as they did—since professionals could have more easily borne the burdens of the permission system. But the spread of photography to ordinary people would not have occurred. Nothing like that growth would have been realized. And certainly, nothing like that growth in a democratic technology of expression would have been realized.
If you drive through San Francisco's Presidio, you might see two gaudy yellow school buses painted over with colorful and striking images, and the logo “Just Think!” in place of the name of a school. But there's little that's “just” cerebral in the projects that these buses enable. These buses are filled with technologies that teach kids to tinker with film. Not the film of Eastman. Not even the film of your VCR. Rather the “film” of digital cameras. Just Think! is a project that enables kids to make films, as a way to understand and critique the filmed culture that they find all around them. Each year, these buses travel to more than thirty schools and enable three hundred to five hundred children to learn something about media by doing something with media. By doing, they think. By tinkering, they learn.
These buses are not cheap, but the technology they carry is increasingly so. The cost of a high-quality digital video system has fallen dramatically. As one analyst puts it, “Five years ago, a good real-time digital video editing system cost $25,000. Today you can get professional quality for $595.”[9] These buses are filled with technology that would have cost hundreds of thousands just ten years ago. And it is now feasible to imagine not just buses like this, but classrooms across the country where kids are learning more and more of something teachers call “media literacy.”
“Media literacy,” as Dave Yanofsky, the executive director of Just Think!, puts it, “is the ability. . . to understand, analyze, and deconstruct media images. Its aim is to make [kids] literate about the way media works, the way it's constructed, the way it's delivered, and the way people access it.”
This may seem like an odd way to think about “literacy.” For most people, literacy is about reading and writing. Faulkner and Hemingway and noticing split infinitives are the things that “literate” people know about.
Maybe. But in a world where children see on average 390 hours of television commercials per year, or between 20,000 and 45,000 commercials generally,[10] it is increasingly important to understand the “grammar” of media. For just as there is a grammar for the written word, so, too, is there one for media. And just as kids learn how to write by writing lots of terrible prose, kids learn how to write media by constructing lots of (at least at first) terrible media.
A growing field of academics and activists sees this form of literacy as crucial to the next generation of culture. For though anyone who has written understands how difficult writing is—how difficult it is to sequence the story, to keep a reader's attention, to craft language to be understandable—few of us have any real sense of how difficult media is. Or more fundamentally, few of us have a sense of how media works, how it holds an audience or leads it through a story, how it triggers emotion or builds suspense.
It took filmmaking a generation before it could do these things well. But even then, the knowledge was in the filming, not in writing about the film. The skill came from experiencing the making of a film, not from reading a book about it. One learns to write by writing and then reflecting upon what one has written. One learns to write with images by making them and then reflecting upon what one has created.
This grammar has changed as media has changed. When it was just film, as Elizabeth Daley, executive director of the University of Southern California's Annenberg Center for Communication and dean of the USC School of Cinema-Television, explained to me, the grammar was about “the placement of objects, color, . . . rhythm, pacing, and texture.”[11] But as computers open up an interactive space where a story is “played” as well as experienced, that grammar changes. The simple control of narrative is lost, and so other techniques are necessary. Author Michael Crichton had mastered the narrative of science fiction. But when he tried to design a computer game based on one of his works, it was a new craft he had to learn. How to lead people through a game without their feeling they have been led was not obvious, even to a wildly successful author.[12]
This skill is precisely the craft a filmmaker learns. As Daley describes, “people are very surprised about how they are led through a film. [I]t is perfectly constructed to keep you from seeing it, so you have no idea. If a filmmaker succeeds you do not know how you were led.” If you know you were led through a film, the film has failed.
Yet the push for an expanded literacy—one that goes beyond text to include audio and visual elements—is not about making better film directors. The aim is not to improve the profession of filmmaking at all. Instead, as Daley explained,
From my perspective, probably the most important digital divide is not access to a box. It's the ability to be empowered with the language that that box works in. Otherwise only a very few people can write with this language, and all the rest of us are reduced to being read-only.
“Read-only.” Passive recipients of culture produced elsewhere. Couch potatoes. Consumers. This is the world of media from the twentieth century.
The twenty-first century could be different. This is the crucial point: It could be both read and write. Or at least reading and better understanding the craft of writing. Or best, reading and understanding the tools that enable the writing to lead or mislead. The aim of any literacy, and this literacy in particular, is to “empower people to choose the appropriate language for what they need to create or express.”[13] It is to enable students “to communicate in the language of the twenty-first century.”[14]
As with any language, this language comes more easily to some than to others. It doesn't necessarily come more easily to those who excel in written language. Daley and Stephanie Barish, director of the Institute for Multimedia Literacy at the Annenberg Center, describe one particularly poignant example of a project they ran in a high school. The high school was a very poor inner-city Los Angeles school. In all the traditional measures of success, this school was a failure. But Daley and Barish ran a program that gave kids an opportunity to use film to express meaning about something the students know something about—gun violence.
The class was held on Friday afternoons, and it created a relatively new problem for the school. While the challenge in most classes was getting the kids to come, the challenge in this class was keeping them away. The “kids were showing up at 6 A.M. and leaving at 5 at night,” said Barish. They were working harder than in any other class to do what education should be about—learning how to express themselves.
Using whatever “free web stuff they could find,” and relatively simple tools to enable the kids to mix “image, sound, and text,” Barish said this class produced a series of projects that showed something about gun violence that few would otherwise understand. This was an issue close to the lives of these students. The project “gave them a tool and empowered them to be able to both understand it and talk about it,” Barish explained. That tool succeeded in creating expression—far more successfully and powerfully than could have been created using only text. “If you had said to these students, 'you have to do it in text,' they would've just thrown their hands up and gone and done something else,” Barish described, in part, no doubt, because expressing themselves in text is not something these students can do well. Yet neither is text a form in which these ideas can be expressed well. The power of this message depended upon its connection to this form of expression.
“But isn't education about teaching kids to write?” I asked. In part, of course, it is. But why are we teaching kids to write? Education, Daley explained, is about giving students a way of “constructing meaning.” To say that that means just writing is like saying teaching writing is only about teaching kids how to spell. Text is one part—and increasingly, not the most powerful part—of constructing meaning. As Daley explained in the most moving part of our interview,
What you want is to give these students ways of constructing meaning. If all you give them is text, they're not going to do it. Because they can't. You know, you've got Johnny who can look at a video, he can play a video game, he can do graffiti all over your walls, he can take your car apart, and he can do all sorts of other things. He just can't read your text. So Johnny comes to school and you say, “Johnny, you're illiterate. Nothing you can do matters.” Well, Johnny then has two choices: He can dismiss you or he [can] dismiss himself. If his ego is healthy at all, he's going to dismiss you. [But i]nstead, if you say, “Well, with all these things that you can do, let's talk about this issue. Play for me music that you think reflects that, or show me images that you think reflect that, or draw for me something that reflects that.” Not by giving a kid a video camera and. . . saying, “Let's go have fun with the video camera and make a little movie.” But instead, really help you take these elements that you understand, that are your language, and construct meaning about the topic. . .
That empowers enormously. And then what happens, of course, is eventually, as it has happened in all these classes, they bump up against the fact, “I need to explain this and I really need to write something.” And as one of the teachers told Stephanie, they would rewrite a paragraph 5, 6, 7, 8 times, till they got it right.
Because they needed to. There was a reason for doing it. They needed to say something, as opposed to just jumping through your hoops. They actually needed to use a language that they didn't speak very well. But they had come to understand that they had a lot of power with this language.
When two planes crashed into the World Trade Center, another into the Pentagon, and a fourth into a Pennsylvania field, all media around the world shifted to this news. Every moment of just about every day for that week, and for weeks after, television in particular, and media generally, retold the story of the events we had just witnessed. The telling was a retelling, because we had seen the events that were described. The genius of this awful act of terrorism was that the delayed second attack was perfectly timed to assure that the whole world would be watching.
These retellings had an increasingly familiar feel. There was music scored for the intermissions, and fancy graphics that flashed across the screen. There was a formula to interviews. There was “balance,” and seriousness. This was news choreographed in the way we have increasingly come to expect it, “news as entertainment,” even if the entertainment is tragedy.
But in addition to this produced news about the “tragedy of September 11,” those of us tied to the Internet came to see a very different production as well. The Internet was filled with accounts of the same events. Yet these Internet accounts had a very different flavor. Some people constructed photo pages that captured images from around the world and presented them as slide shows with text. Some offered open letters. There were sound recordings. There was anger and frustration. There were attempts to provide context. There was, in short, an extraordinary worldwide barn raising, in the sense Mike Godwin uses the term in his book Cyber Rights, around a news event that had captured the attention of the world.
There was ABC and CBS, but there was also the Internet.
I don't mean simply to praise the Internet—though I do think the people who supported this form of speech should be praised. I mean instead to point to a significance in this form of speech. For like a Kodak, the Internet enables people to capture images. And as in a film made by a student on the “Just Think!” bus, those images can be mixed with sound or text.
But unlike any technology for simply capturing images, the Internet allows these creations to be shared with an extraordinary number of people, practically instantaneously. This is something new in our tradition—not just that culture can be captured mechanically, and obviously not just that events are commented upon critically, but that this mix of captured images, sound, and commentary can be widely spread practically instantaneously.
September 11 was not an aberration. It was a beginning. Around the same time, a form of communication that has grown dramatically was just beginning to come into public consciousness: the Web-log, or blog. The blog is a kind of public diary, and within some cultures, such as in Japan, it functions very much like a diary. In those cultures, it records private facts in a public way—it's a kind of electronic Jerry Springer, available anywhere in the world.
But in the United States, blogs have taken on a very different character. There are some who use the space simply to talk about their private life. But there are many who use the space to engage in public discourse. Discussing matters of public import, criticizing others who are mistaken in their views, criticizing politicians about the decisions they make, offering solutions to problems we all see: blogs create the sense of a virtual public meeting, but one in which we don't all hope to be there at the same time and in which conversations are not necessarily linked. The best of the blog entries are relatively short; they point directly to words used by others, criticizing with or adding to them. They are arguably the most important form of unchoreographed public discourse that we have.