Wednesday, May 11, 2011

The Class That Built Apps, and Fortunes




ALL right, class, here’s your homework assignment: Devise an app. Get people to use it. Repeat.

That was the task for some Stanford students in the fall of 2007, in what became known here as the “Facebook Class.”

No one expected what happened next.

The students ended up getting millions of users for free apps that they designed to run on Facebook. And, as advertising rolled in, some of those students started making far more money than their professors.

Almost overnight, the Facebook Class fired up the careers and fortunes of more than two dozen students and teachers here. It also helped to pioneer a new model of entrepreneurship that has upturned the tech establishment: the lean start-up.

“Everything was happening so fast,” recalls Joachim De Lombaert, now 23. His team’s app netted $3,000 a day and morphed into a company that later sold for a six-figure sum.

“I almost didn’t realize what it all meant,” he says.

Neither did many of his classmates. Back then, Facebook apps were a novelty. The iPhone had just arrived, and the first Android phone was a year off.

But by teaching students to build no-frills apps, distribute them quickly and worry about perfecting them later, the Facebook Class stumbled upon what has become standard operating procedure for a new generation of entrepreneurs and investors in Silicon Valley and beyond. For many, the long trek from idea to product to company has turned into a sprint.

Start-ups once required a lot of money, time and people. But over the past decade, free, open-source software and “cloud” services have brought costs down, while ad networks help bring in revenue quickly.

The app phenomenon has accentuated the trend and helped unleash what some call a new wave of technology innovation — and what others call a bubble.

Early on, the Facebook Class became a microcosm of Silicon Valley. Working in teams of three, the 75 students created apps that collectively had 16 million users in just 10 weeks. Many of those apps were sort of silly: Mr. De Lombaert’s, for example, allowed users to send “hotness” points to Facebook friends. Yet during the term, the apps, free for users, generated roughly $1 million in advertising revenue.

Such successes helped inspire entrepreneurs to ditch business plans and work on apps. Not all succeeded, but those that did helped to fuel the expansion of Facebook, which now has nearly 700 million users.

Venture capitalists also began rethinking their approach. Some created investment funds tailored to the new, bare-bones start-ups.

“A lot of the concepts and ideas that came out of the class influenced the structure of the fund that I am working on now,” says Dave McClure, one of the class instructors and founder of 500 Startups, which invests in lean start-ups. “The class was the realization that this stuff really works.”

Nearly four years later, many of the students have learned that building a business is a lot harder than creating an app — even an app worthy of an A+.

“Starting a company is definitely more work,” says Edward Baker, who was Mr. De Lombaert’s partner in the class and later in business. The two have founded Friend.ly, a social networking start-up.

Still, many students were richly rewarded. Some turned their homework into companies. A few have since sold those businesses to the likes of Zynga. Others joined hot start-ups like RockYou, a gaming site that at the time was among the most successful Facebook apps.

The Facebook Class changed Mr. De Lombaert’s life. His team’s app, Send Hotness, brought in more users and more money faster than any other in the class. And its success attracted the attention of venture capitalists.

“The class, more than anything, set the tone for us to try to start something big,” says Mr. Baker, 32, Friend.ly’s C.E.O.

When the Send Hotness app began to take off, Mr. Baker encouraged Mr. De Lombaert to treat himself to a new car. Mr. De Lombaert settled for a laptop. (He also put some money aside to help to pay his Stanford tuition.) They eventually sold the app to a dating Web site.

Facebook did not actively participate in the Stanford class. But some of its engineers attended sessions, and it benefited from the success of the students’ apps. “It really felt like an incubator,” says David Fetterman, a Facebook engineer who helped develop the applications platform.

The startling success of some of the class’s projects got Silicon Valley buzzing. The final session, held in an auditorium in December 2007, was attended by more than 500 people, including many investors.

“The Facebook platform was taking off, and there was this feeling of a gold rush,” said Mike Maples Jr., an investor who attended some of the classes and ended up backing one of the start-ups.

THE Facebook Class was the brainchild of B. J. Fogg, who runs the Persuasive Technology Lab at Stanford. An energetic academic and an innovation guru, he focuses on how to harness technology and human psychology to influence people’s behavior.

Mr. Fogg thought that the Facebook platform would be a good way to test some of his theories. Creating a new model of entrepreneurship was far from his mind.

At first, university administrators pushed back. “Facebook was not taken so seriously in academic circles back then,” Mr. Fogg recalls.

But there was no hesitation among students — from undergraduates in computer science to M.B.A. candidates — who were spending much of their lives immersed in Facebook.

From the start, many approached the class from a business angle. Mr. Baker, for instance, was a graduate business student but lacked technical skills, so he spent his first week interviewing engineers. “I wanted a technical co-founder,” he says.

He settled on Mr. De Lombaert, and the two, along with a third student, Alex Onsager, created Send Hotness. It let users send points to friends they considered “hot” and to compare “hotness” rankings.

Soon they found themselves in a proverbial “the dog ate my homework” situation. Three days before a presentation was due, Mr. De Lombaert accidentally deleted the computer code he was tinkering with. “We kind of freaked out,” he recalls.

Rebuilding the app would take too long. So, working around the clock over a weekend, they built another version, with a more rudimentary algorithm.

The stripped-down app took off. In five weeks, five million people signed up. When the team began placing ads on the app, the money poured in.

They had stumbled upon one of the themes of the class: make things simple, and perfect them later.

“The students did an amazing job of getting stuff into the market very quickly,” says Michael Dearing, a consulting associate professor at the Institute of Design at Stanford, who now teaches a class based on similar, rapid prototyping ideas. “It was a huge success.”

DAN GREENBERG was sitting at the kitchen table one night when he and another teaching assistant decided to get into the app game. Mr. Greenberg, a graduate student who had done research for Mr. Fogg, hadn’t planned to get app-happy. But the students’ success whetted his appetite.

Four weeks into the quarter, he and his colleague, Rob Fan, set out to create an app that would let Facebook users send “hugs” to one another.

It took them all of five hours.

The app took off. So they moved on to apps for “kisses,” “pillow fights” and other digital interactions — 70 in all.

Their apps caught on with millions of people and were soon bringing in nearly $100,000 a month in ads. After the class ended, the two started a company, 750 Industries, named after the 750 Pub at Stanford where Mr. Greenberg and Mr. Fan were drinking when they decided to become business partners.

But juggling the business and schoolwork was too much for Mr. Greenberg, then 22. So he called his father.

“I said, ‘Dad, it is 10 p.m., and I’ve got so much stuff to do,’ ” Mr. Greenberg recalls. “ ‘We’re running this business, and I’ve got customers, and we are earning money, and we got financing and we have people to hire. But I have to write a paper tonight, and I just don’t have time for it.’ ”

His father advised him to pull a Mark Zuckerberg and drop out. The next day, Mr. Greenberg did just that.

Now 25, he works out of a glass-walled corner office in San Francisco. He is C.E.O. of his company, now called Sharethrough, which uses social media to distribute videos across the Web for companies. It employs 30 people and has raised about $6 million in venture capital. “It feels like a fairy tale when you look back on it,” he says of the class.

He has upgraded his lifestyle somewhat, but still doesn’t own a car. “I have a Vespa and skateboard,” he says.

“LOVE CHILD.” It sounds like an unlikely name for an app. But Johnny Hwin and his Stanford class team set out to build an app of that name, one that would let two users create and raise a virtual child. It never took off.

“We were overly ambitious,” Mr. Hwin says.

Seeing his classmates strike gold with simpler ideas proved to be a valuable lesson. In 2009, he began working on Damntheradio.com, a Facebook marketing tool that helped bands and musicians connect with fans online.

It opened last June and was acquired in January by FanBridge for a few million dollars, he says; Mr. Hwin is now a vice president there.

Mr. Hwin, who is 26 and also a musician, now lives in a loft space in the Mission neighborhood in San Francisco. He uses his place as a kind of salon for late-night art shows and concerts.

“With Love Child, we wanted it to be perfect,” he says. With Damntheradio, he found his first clients by showing mockups of the product. “We were able to launch within weeks,” he says.

Another class member, Robert Cezar Matei, says he had only modest success with his projects. One, he said, allowed users to send “cheesy pickup lines” to friends; another encouraged people to reveal something about themselves. After graduating from Stanford, he wanted to earn some money to go traveling, but instead of getting a job, he decided to write Facebook apps. “I’d seen my peers being so successful with apps,” he says. “If they could do it, I could do it.”

After a few false starts, he created an app that let people send points and “kisses” to friends. It struggled until Mr. Matei, who speaks several languages, translated the app. The next day, traffic jumped fivefold. He added games and employees, and the app became one of the most popular Facebook programs in Europe. In late 2009, he sold it to Zynga for an undisclosed sum.

Also in the class was Joshua Reeves, who built an app that created animations that Facebook members would send to one another as birthday greetings or other messages. It made enough money for him to quit his job in 2008 to start Buzzeo, a content management system for Facebook. A year ago, Buzzeo was acquired by Context Optional, where Mr. Reeves, 28, is now a vice president. Last week, Efficient Frontier, a digital marketing company, acquired Context Optional for an undisclosed sum.

ONE recent afternoon at the headquarters of Friend.ly in Mountain View, Calif., 10 engineers worked away as two employees turned their attention to a companywide project: a 24,000-piece jigsaw puzzle.

For much of the past year, Friend.ly has worked on developing its service, a social network for meeting new people, without much success. A few weeks ago, the work appeared to pay off: traffic took off, growing to nearly five million monthly users.

Mr. Baker says the Facebook platform is a magnet for young developers, even though the kind of simple apps that were the focus of his Stanford class now face bigger hurdles. Facebook has made it harder to develop big-hit apps by controlling how apps spread virally.

But Mr. Fogg says that for those who were in the right place at the right time — in late 2007 — things were different. “There was a period of time when you could walk in and collect gold,” he says. “It was a landscape that was ready to be harvested.”

"BBC Horizon" - What Is Reality?

Friday, March 18, 2011

Self-Service: The Delicate Dance of Online Bragging


A few years ago, I belonged to an informal group of freelance writers and editors who would assemble regularly to drink and talk shop. One evening, someone in our rotating cast brought along a new member, who began regaling us with tales of her editorial triumphs and financial success. Apparently she never got the memo that our gatherings were outlets for complaint and commiseration. As the evening wore on, the rest of us adopted a logical, if immature, course of action: We all pretended to go home and then reconvened at another bar without her. In the parlance of our times, you might say that we collectively unfollowed her.
If this episode had actually taken place in today’s world of online social networking, however, we probably wouldn’t have batted an eye. The self-aggrandizement that offended the group is standard fare in my Twitter feed — my own posts too often included. (BTW, I’ll be appearing on TV this week.) But far from clearing out the virtual bar, expressions of vanity online are usually rewarded with a cascade of back-patting: a virtual thumbs-up, a hearty “congrats!,” a “proud-to-know-you” retweet.
Social networking sites have inverted the rules of privacy and etiquette, and no cultural norm is tossed aside more often on the Web than plain old modesty. This raises an existential question: When you celebrate yourself online, are you a willing participant in a brave new social future, or are you just being an ass? Don’t panic; it’s the former — as long as you strike a balance.
For sure, posting anything online is an act of either inherent immodesty or existence affirmation, depending on your outlook: “I’m alive! I’m doing (or discovering) things! People out there should give a damn!” Bragging is practically coded into social media’s DNA, and there’s nothing necessarily wrong with that. We show off by noting the interestingness of our companions, the solidity of our relationships, the fabulousness of our meals. What are an excess of Facebook friends and LinkedIn connections for if not low-intensity name-dropping? “Look how many fascinating people are willing to connect with me.”
An entire taxonomy of status types has evolved for sharing some bit of good fortune. There’s one for every online persona. The straightforward celebration: “W00t!! I’ve been named to Bigtime magazine’s 100 most influential!” The ironic frame: “Shameless self-promotion: I was just named one of Bigtime’s 100 most influential people.” Or the softer sell, the just-lucky-to-be-here approach: “I am grateful to be included in this year’s 100 most influential people.” Or the mock-surprise approach: “I’m chuckling — according to Bigtime magazine, I’m a top 100 most influential person.”
Perhaps oddest of all, considering its real-life parallel, is the retweet-without-comment: “RT: @longhornfan43: Evan Ratliff named in Bigtime magazine 100 most influential people.” Avoid this one. Imagine using a lull in dinner party conversation to announce that “a man in Texas, whom none of you know, recently told his friends I was named to the Bigtime 100. Salad, anyone?”
Immodesty thrives on Facebook and Twitter because they enable what social scientists call self-enhancement — the human tendency to oversell ourselves. But they also nurture a sense of mutual admiration that the offline world often does not. Social networking tends to create self-reinforcing spirals of reciprocal kindness. You like my cat pictures, so I celebrate your job promotion. The incentives tend to be stacked against negativity, and in some cases implicitly discourage it. In the Facebook world, we can Like or Hide things, but there’s no Dislike button — even when you need one.
In fact, James Fowler, a political scientist at UC San Diego who studies social networks both online and off, has shown that positive networks built on cooperation and altruism tend to thrive, while negative ones tend to dissolve. “Apparently, evolution favors behaviors that cause us to disconnect from mean people,” he says.
And why not? In a modern world that bombards us with reasons to feel bad about ourselves, maybe there’s room for a little extra public celebration when things go well. Online, we’re safe to note our achievements, our loves, our tiny daily triumphs in a bid for a little positive feedback. So go ahead and, as the marketing gurus say, tend the Brand of You. Just don’t be me-first. Roll as many logs to others as you do back to yourself. Promote those deserving friends too humble to promote themselves and you’ll be tending the entire social-network ecosystem.
But if you’re inclined to turn your feed into a virtual trophy case, remember that followers aren’t the same as listeners. You could be one self-enhancement away from the Ignore list. That drinking group could silently be opting out of Brand You and decamping for a walled garden on Ning. You’d never even know.

Wired Magazine

Sunday, February 20, 2011

Secrets of a mind gamer


Dom DeLuise, the comedian (and five of clubs), was implicated in the following unseemly acts in my mind’s eye: He hocked a fat globule of spittle (nine of clubs) on Albert Einstein’s thick white mane (three of diamonds) and delivered a devastating karate kick (five of spades) to the groin of Pope Benedict XVI (six of diamonds). Michael Jackson (king of hearts) engaged in behavior bizarre even for him. He defecated (two of clubs) on a salmon burger (king of clubs) and captured his flatulence (queen of clubs) in a balloon (six of spades).

This tawdry tableau, which I’m not proud to commit to the page, goes a long way toward explaining the unexpected spot in which I found myself in the spring of 2006. Sitting to my left was Ram Kolli, an unshaven 25-year-old business consultant from Richmond, Va., who was also the defending United States memory champion. To my right was the lens of a television camera from a national cable network. Spread out behind me, where I couldn’t see them and they couldn’t disturb me, were about 100 spectators and a pair of TV commentators offering play-by-play analysis. One was a blow-dried mixed martial arts announcer named Kenny Rice, whose gravelly, bedtime voice couldn’t conceal the fact that he seemed bewildered by this jamboree of nerds. The other was the Pelé of U.S. memory sport, a bearded 43-year-old chemical engineer and four-time national champion from Fayetteville, N.C., named Scott Hagwood. In the corner of the room sat the object of my affection: a kitschy, two-tiered trophy of a silver hand with gold nail polish brandishing a royal flush. It was almost as tall as my 2-year-old niece (if lighter than most of her stuffed animals).

The audience was asked not to take any flash photographs and to maintain total silence. Not that Kolli or I could possibly have heard them. Both of us were wearing earplugs. I also had on a pair of industrial-strength earmuffs that looked as if they belonged to an aircraft-carrier deckhand (in the heat of a memory competition, there is no such thing as deaf enough). My eyes were closed. On a table in front of me, lying face down between my hands, were two shuffled decks of playing cards. In a moment, the chief arbiter would click a stopwatch, and I would have five minutes to memorize the order of both decks.

The unlikely story of how I ended up in the finals of the U.S.A. Memory Championship, stock-still and sweating profusely, began a year earlier in the same auditorium, on the 19th floor of the Con Edison building near Union Square in Manhattan. I was there to write a short article about what I imagined would be the Super Bowl of savants.

The scene I stumbled upon, however, was something less than a clash of titans: a bunch of guys (and a few women), varying widely in age and personal grooming habits, poring over pages of random numbers and long lists of words. They referred to themselves as mental athletes, or M.A.’s for short. The best among them could memorize the first and last names of dozens of strangers in just a few minutes, thousands of random digits in under an hour and — to impress those with a more humanistic bent — any poem you handed them.

I asked Ed Cooke, a competitor from England — he was 24 at the time and was attending the U.S. event to train for that summer’s World Memory Championships — when he first realized he was a savant.

“Oh, I’m not a savant,” he said, chuckling.

“Photographic memory?” I asked.

He chuckled again. “Photographic memory is a detestable myth. Doesn’t exist. In fact, my memory is quite average. All of us here have average memories.”

That seemed hard to square with the fact that he knew huge chunks of “Paradise Lost” by heart. Earlier, I had watched him recite a list of 252 random digits as effortlessly as if it were his telephone number.

“What you have to understand is that even average memories are remarkably powerful if used properly,” Cooke said. He explained to me that mnemonic competitors saw themselves as “participants in an amateur research program” whose aim is to rescue a long-lost tradition of memory training.

Today we have books, photographs, computers and an entire superstructure of external devices to help us store our memories outside our brains, but it wasn’t so long ago that culture depended on individual memories. A trained memory was not just a handy tool but also a fundamental facet of any worldly mind. It was considered a form of character-building, a way of developing the cardinal virtue of prudence and, by extension, ethics. Only through memorizing, the thinking went, could ideas be incorporated into your psyche and their values absorbed.

Cooke was wearing a suit with a loosened tie, his curly brown hair cut in a shoulder-length mop, and, incongruously, a pair of flip-flops emblazoned with the Union Jack. He was a founding member of a secret society of memorizers called the KL7 and was at that time pursuing a Ph.D. in cognitive science at the University of Paris. He was also working on inventing a new color — “not just a new color, but a whole new way of seeing color.”

Cooke and all the other mental athletes I met kept insisting that anyone could do what they do. It was simply a matter of learning to “think in more memorable ways,” using a set of mnemonic techniques almost all of which were invented in ancient Greece. These techniques existed not to memorize useless information like decks of playing cards but to etch into the brain foundational texts and ideas.

It was an attractive fantasy. If only I could learn to remember like Cooke, I figured, I would be able to commit reams of poetry to heart and really absorb it. I imagined being one of those admirable (if sometimes insufferable) individuals who always has an apposite quotation to drop into conversation. How many worthwhile ideas have gone unthought and connections unmade because of my memory’s shortcomings?

At the time, I didn’t quite believe Cooke’s bold claims about the latent mnemonic potential in all of us. But they seemed worth investigating. Cooke offered to serve as my coach and trainer. Memorizing would become a part of my daily routine. Like flossing. Except that I would actually remember to do it.

In 2003, the journal Nature reported on eight people who finished near the top of the World Memory Championships. The study looked at whether the memorizers’ brains were structurally different from the rest of ours or whether they were just making better use of the memorizing abilities we all possess.

Researchers put the mental athletes and a group of control subjects into f.M.R.I. scanners and asked them to memorize three-digit numbers, black-and-white photographs of people’s faces and magnified images of snowflakes as their brains were being scanned. What they found was surprising: not only did the brains of the mental athletes appear anatomically indistinguishable from those of the control subjects, but on every test of general cognitive ability, the mental athletes’ scores came back well within the normal range. When Cooke told me he was an average guy with an average memory, it wasn’t just modesty speaking.

There was, however, one telling difference between the brains of the mental athletes and those of the control subjects. When the researchers looked at the parts of the brain that were engaged when the subjects memorized, they found that the mental athletes were relying more heavily on regions known to be involved in spatial memory. At first glance, this didn’t seem to make sense. Why would mental athletes be navigating spaces in their minds while trying to learn three-digit numbers?

The answer lies in a discovery supposedly made by the poet Simonides of Ceos in the fifth century B.C. After a tragic banquet-hall collapse, of which he was the sole survivor, Simonides was asked to give an account of who was buried in the debris.

When the poet closed his eyes and reconstructed the crumbled building in his imagination, he had an extraordinary realization: he remembered where each of the guests at the ill-fated dinner had been sitting. Even though he made no conscious effort to memorize the layout of the room, it nonetheless left a durable impression. From that simple observation, Simonides reportedly invented a technique that would form the basis of what came to be known as the art of memory. He realized that if there hadn’t been guests sitting at a banquet table but, say, every great Greek dramatist seated in order of birth — or each of the words of one of his poems or every item he needed to accomplish that day — he would have remembered that instead. He reasoned that just about anything could be imprinted upon our memories, and kept in good order, simply by constructing a building in the imagination and filling it with imagery of what needed to be recalled. This imagined edifice could then be walked through at any time in the future. Such a building would later come to be called a memory palace.

Virtually all the details we have about classical memory training — indeed, nearly all the memory tricks in the competitive mnemonist’s arsenal — can be traced to a short Latin rhetoric textbook called “Rhetorica ad Herennium,” written sometime between 86 and 82 B.C. It is the only comprehensive discussion of the memory techniques attributed to Simonides to have survived into the Middle Ages. The techniques described in this book were widely practiced in the ancient and medieval worlds. Memory training was considered a centerpiece of classical education in the language arts, on par with grammar, logic and rhetoric. Students were taught not just what to remember but how to remember it. In a world with few books, memory was sacrosanct.

Living as we do amid a deluge of printed words — would you believe more than a million new books were published last year? — it’s hard to imagine what it must have been like to read in the age before Gutenberg, when a book was a rare and costly handwritten object that could take a scribe months of labor to produce. Today we write things down precisely so we don’t have to remember them, but through the late Middle Ages, books were thought of not just as replacements for memory but also as aides-mémoire. Even as late as the 14th century, there might be just several dozen copies of any given text in existence, and those copies might well be chained to a desk or a lectern in some library, which, if it contained a hundred other books, would have been considered particularly well stocked. If you were a scholar, you knew that there was a reasonable likelihood you would never see a particular text again, so a high premium was placed on remembering what you read.

To our memory-bound predecessors, the goal of training your memory was not to become a “living book” but rather a “living concordance,” writes the historian Mary Carruthers, a walking index of everything read or learned that was considered worthwhile. And this required building an organizational scheme for accessing that information. When the point of reading is remembering, you approach a text very differently from the way most of us do today. You can’t read as fast as you’re probably reading this article and expect to remember what you’ve read for any considerable length of time. If something is going to be made memorable, it has to be dwelled upon, repeated.

In his essay “First Steps Toward a History of Reading,” Robert Darnton describes a switch from “intensive” to “extensive” reading that occurred as printed books began to proliferate. Until relatively recently, people read “intensively,” Darnton says. “They had only a few books — the Bible, an almanac, a devotional work or two — and they read them over and over again, usually aloud and in groups, so that a narrow range of traditional literature became deeply impressed on their consciousness.” Today we read books “extensively,” often without sustained focus, and with rare exceptions we read each book only once. We value quantity of reading over quality of reading. We have no choice, if we want to keep up with the broader culture. I always find looking up at my shelves, at the books that have drained so many of my waking hours, to be a dispiriting experience. There are books up there that I can’t even remember whether I’ve read or not.

Attention, of course, is a prerequisite to remembering. Part of the reason that techniques like visual imagery and the memory palace work so well is that they enforce a degree of mindfulness that is normally lacking. If you want to use a memory palace for permanent storage, you have to take periodic time-consuming mental strolls through it to keep your images from fading. Mostly, nobody bothers. In fact, mnemonists deliberately empty their palaces after competitions, so they can reuse them again and again.

“Rhetorica ad Herennium” underscores the importance of purposeful attention by making a distinction between natural memory and artificial memory: “The natural memory is that memory which is embedded in our minds, born simultaneously with thought. The artificial memory is that memory which is strengthened by a kind of training and system of discipline.” In other words, natural memory is the hardware you’re born with. Artificial memory is the software you run on it.

The principle underlying most memory techniques is that our brains don’t remember every type of information equally well. Like every other one of our biological faculties, our memories evolved through a process of natural selection in an environment that was quite different from the one we live in today. And much as our taste for sugar and fat may have served us well in a world of scarce nutrition but is maladaptive in a world of ubiquitous fast-food joints, our memories aren’t perfectly suited for our contemporary information age. Our hunter-gatherer ancestors didn’t need to recall phone numbers or word-for-word instructions from their bosses or the Advanced Placement U.S. history curriculum or (because they lived in relatively small, stable groups) the names of dozens of strangers at a cocktail party. What they did need to remember was where to find food and resources and the route home and which plants were edible and which were poisonous. Those are the sorts of vital memory skills that they depended on, which probably helps explain why we are comparatively good at remembering visually and spatially.

In a famous experiment carried out in the 1970s, researchers asked subjects to look at 10,000 images just once and for just five seconds each. (It took five days to perform the test.) Afterward, when they showed the subjects pairs of pictures — one they looked at before and one they hadn’t — they found that people were able to remember more than 80 percent of what they had seen. For all of our griping over the everyday failings of our memories — the misplaced keys, the forgotten name, the factoid stuck on the tip of the tongue — our biggest failing may be that we forget how rarely we forget. The point of the memory techniques described in “Rhetorica ad Herennium” is to take the kinds of memories our brains aren’t that good at holding onto and transform them into the kinds of memories our brains were built for. It advises creating memorable images for your palaces: the funnier, lewder and more bizarre, the better. “When we see in everyday life things that are petty, ordinary and banal, we generally fail to remember them. . . . But if we see or hear something exceptionally base, dishonorable, extraordinary, great, unbelievable or laughable, that we are likely to remember for a long time.”

What distinguishes a great mnemonist, I learned, is the ability to create lavish images on the fly, to paint in the mind a scene so unlike any other it cannot be forgotten. And to do it quickly. Many competitive mnemonists argue that their skills are less a feat of memory than of creativity. For example, one of the most popular techniques used to memorize playing cards involves associating every card with an image of a celebrity performing some sort of a ludicrous — and therefore memorable — action on a mundane object. When it comes time to remember the order of a series of cards, those memorized images are shuffled and recombined to form new and unforgettable scenes in the mind’s eye. Using this technique, Ed Cooke showed me how an entire deck can be quickly transformed into a comically surreal, and unforgettable, memory palace.
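
The mechanics of that card code are easy to sketch: three cards at a time become one scene, and each scene is parked at the next stop along a familiar route. The short Python below is a minimal illustration only; the card-to-image tables and loci are hypothetical placeholders (loosely echoing scenes Foer describes later in the piece), and the split of roles across cards is my assumption, not any competitor's published system.

```python
# Minimal sketch of the celebrity/action/object card code described above.
# Tables are deliberately partial, hypothetical stand-ins; a real system
# assigns an image to all 52 cards in each role.
PERSON = {"3C": "the Incredible Hulk", "7H": "Terry Bradshaw", "5H": "Jerry Seinfeld"}
ACTION = {"7D": "riding a stationary bike wearing", "9D": "balancing on",
          "AD": "sprawled out, bleeding, on"}
OBJECT = {"JS": "oversize loopy earrings", "8H": "a wheelchair", "JH": "a Lamborghini"}

# Stops along an intimately familiar route -- the "memory palace".
LOCI = ["inside the front door", "by the hall mirror", "in the hallway"]


def deck_to_scenes(deck):
    """Group the deck three cards at a time and park one vivid scene at each locus."""
    scenes = []
    for i in range(0, len(deck) - 2, 3):
        person_card, action_card, object_card = deck[i:i + 3]
        scenes.append("{}: {} {} {}".format(
            LOCI[(i // 3) % len(LOCI)],
            PERSON.get(person_card, f"<person for {person_card}>"),
            ACTION.get(action_card, f"<action for {action_card}>"),
            OBJECT.get(object_card, f"<object for {object_card}>"),
        ))
    return scenes


if __name__ == "__main__":
    # Nine cards from a shuffled deck collapse into three scenes along the route.
    for scene in deck_to_scenes(["3C", "7D", "JS", "7H", "9D", "8H", "5H", "AD", "JH"]):
        print(scene)
```

Recalling the deck then amounts to walking the route again and reading each scene back into its three cards.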

But mental athletes don’t merely embrace the practice of the ancients. The sport of competitive memory is driven by an arms race of sorts. Each year someone — usually a competitor who is temporarily underemployed or a student on summer vacation — comes up with a more elaborate technique for remembering more stuff more quickly, forcing the rest of the field to play catch-up. In order to remember digits, for example, Cooke recently invented a code that allows him to convert every number from 0 to 999,999,999 into a unique image that he can then deposit in a memory palace.
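
The article doesn't explain how Cooke's number code is built, but the arithmetic hints at one common construction: three 000-999 lookup tables (person, action, object) give 1,000 × 1,000 × 1,000 = one billion combinations, enough for every value from 0 to 999,999,999. The sketch below shows that generic construction with toy tables; it is an assumption for illustration, not Cooke's actual system.

```python
# Generic sketch of a number code spanning 0..999,999,999: pad to nine digits,
# split into three 3-digit chunks, and read them as person, action and object
# from three 1,000-entry tables (1,000^3 = 10^9 unique images).
# The tables here are tiny stand-ins; this is not Cooke's actual system.

def chunk3(n: int) -> tuple[int, int, int]:
    """Split a number in [0, 999_999_999] into three 3-digit chunks."""
    if not 0 <= n <= 999_999_999:
        raise ValueError("number out of range")
    s = f"{n:09d}"
    return int(s[0:3]), int(s[3:6]), int(s[6:9])


# Toy tables: a real system fills in all 1,000 slots of each.
PERSON = {7: "a juggling clown", 42: "Napoleon", 867: "an opera singer"}
ACTION = {3: "arm-wrestling", 530: "tap-dancing on", 999: "devouring"}
OBJECT = {9: "a grand piano", 123: "a rubber duck", 500: "the Eiffel Tower"}


def number_to_image(n: int) -> str:
    p, a, o = chunk3(n)
    return "{} {} {}".format(
        PERSON.get(p, f"<person #{p}>"),
        ACTION.get(a, f"<action #{a}>"),
        OBJECT.get(o, f"<object #{o}>"),
    )


if __name__ == "__main__":
    print(number_to_image(42_530_123))  # Napoleon tap-dancing on a rubber duck
    print(number_to_image(7))           # chunks are 000, 000, 007
```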

Memory palaces don’t have to be palatial — or even actual buildings. They can be routes through a town or signs of the zodiac or even mythical creatures. They can be big or small, indoors or outdoors, real or imaginary, so long as they are intimately familiar. The four-time U.S. memory champion Scott Hagwood uses luxury homes featured in Architectural Digest to store his memories. Dr. Yip Swee Chooi, the effervescent Malaysian memory champ, used his own body parts to help him memorize the entire 57,000-word Oxford English-Chinese dictionary. In the 15th century, an Italian jurist named Peter of Ravenna is said to have used thousands of memory palaces to store quotations on every important subject, classified alphabetically. When he wished to expound on a given topic, he simply reached into the relevant chamber and pulled out the source. “When I left my country to visit as a pilgrim the cities of Italy, I can truly say I carried everything I owned with me,” he wrote.

When I first set out to train my memory, the prospect of learning these elaborate techniques seemed preposterously daunting. One of my first steps was to dive into the scientific literature for help. One name kept popping up: K. Anders Ericsson, a psychology professor at Florida State University and the author of an article titled “Exceptional Memorizers: Made, Not Born.”

Ericsson laid the foundation for what’s known as Skilled Memory Theory, which explains how and why our memories can be improved, within limits. In 1978, he and a fellow psychologist named Bill Chase conducted what became a classic experiment on a Carnegie Mellon undergraduate student, who was immortalized as S.F. in the literature. Chase and Ericsson paid S.F. to spend several hours a week in their lab taking a simple memory test again and again. S.F. sat in a chair and tried to remember as many numbers as possible as they were read off at the rate of one per second. At the outset, he could hold only about seven digits at a time in his head. When the experiment wrapped up — two years and 250 mind-numbing hours later — S.F. had increased his ability to remember numbers by a factor of 10.

When I called Ericsson and told him that I was trying to train my memory, he said he wanted to make me his research subject. We struck a deal. I would give him the records of my training, which might prove useful for his research. In return, he and his graduate students would analyze the data in search of how I might perform better. Ericsson encouraged me to think of enhancing my memory in the same way I would think about improving any other skill, like learning to play an instrument. My first assignment was to begin collecting architecture. Before I could embark on any serious degree of memory training, I first needed a stockpile of palaces at my disposal. I revisited the homes of old friends and took walks through famous museums, and I built entirely new, fantastical structures in my imagination. And then I carved each building up into cubbyholes for my memories.

Cooke kept me on a strict training regimen. Each morning, after drinking coffee but before reading the newspaper or showering or getting dressed, I sat at my desk for 10 to 15 minutes to work through a poem or memorize the names in an old yearbook. Rather than take a magazine or book along with me on the subway, I would whip out a page of random numbers or a deck of playing cards and try to commit it to memory. Strolls around the neighborhood became an excuse to memorize license plates. I began to pay a creepy amount of attention to name tags. I memorized my shopping lists. Whenever someone gave me a phone number, I installed it in a special memory palace. Over the next several months, while I built a veritable metropolis of memory palaces and stocked them with strange and colorful images, Ericsson kept tabs on my development. When I got stuck, I would call him for advice, and he would inevitably send me scurrying for some journal article that he promised would help me understand my shortcomings. At one point, not long after I started training, my memory stopped improving. No matter how much I practiced, I couldn’t memorize playing cards any faster than 1 every 10 seconds. I was stuck in a rut, and I couldn’t figure out why. “My card times have hit a plateau,” I lamented.

“I would recommend you check out the literature on speed typing,” he replied.

When people first learn to use a keyboard, they improve very quickly from sloppy single-finger pecking to careful two-handed typing, until eventually the fingers move effortlessly and the whole process becomes unconscious. At this point, most people’s typing skills stop progressing. They reach a plateau. If you think about it, it’s strange. We’ve always been told that practice makes perfect, and yet many people sit behind a keyboard for hours a day. So why don’t they just keep getting better and better?

In the 1960s, the psychologists Paul Fitts and Michael Posner tried to answer this question by describing the three stages of acquiring a new skill. During the first phase, known as the cognitive phase, we intellectualize the task and discover new strategies to accomplish it more proficiently. During the second, the associative phase, we concentrate less, making fewer major errors, and become more efficient. Finally we reach what Fitts and Posner called the autonomous phase, when we’re as good as we need to be at the task and we basically run on autopilot. Most of the time that’s a good thing. The less we have to focus on the repetitive tasks of everyday life, the more we can concentrate on the stuff that really matters. You can actually see this phase shift take place in f.M.R.I.’s of subjects as they learn new tasks: the parts of the brain involved in conscious reasoning become less active, and other parts of the brain take over. You could call it the O.K. plateau.

Psychologists used to think that O.K. plateaus marked the upper bounds of innate ability. In his 1869 book “Hereditary Genius,” Sir Francis Galton argued that a person could improve at mental and physical activities until he hit a wall, which “he cannot by any education or exertion overpass.” In other words, the best we can do is simply the best we can do. But Ericsson and his colleagues have found over and over again that with the right kind of effort, that’s rarely the case. They believe that Galton’s wall often has much less to do with our innate limits than with what we consider an acceptable level of performance. They’ve found that top achievers typically follow the same general pattern. They develop strategies for keeping out of the autonomous stage by doing three things: focusing on their technique, staying goal-oriented and getting immediate feedback on their performance. Amateur musicians, for example, tend to spend their practice time playing music, whereas pros tend to work through tedious exercises or focus on difficult parts of pieces. Similarly, the best ice skaters spend more of their practice time trying jumps that they land less often, while lesser skaters work more on jumps they’ve already mastered. In other words, regular practice simply isn’t enough.

To improve, we have to be constantly pushing ourselves beyond where we think our limits lie and then pay attention to how and why we fail. That’s what I needed to do if I was going to improve my memory.

With typing, it’s relatively easy to get past the O.K. plateau. Psychologists have discovered that the most efficient method is to force yourself to type 10 to 20 percent faster than your comfort pace and to allow yourself to make mistakes. Only by watching yourself mistype at that faster speed can you figure out the obstacles that are slowing you down and overcome them. Ericsson suggested that I try the same thing with cards. He told me to find a metronome and to try to memorize a card every time it clicked. Once I figured out my limits, he instructed me to set the metronome 10 to 20 percent faster and keep trying at the quicker pace until I stopped making mistakes. Whenever I came across a card that was particularly troublesome, I was supposed to make a note of it and see if I could figure out why it was giving me cognitive hiccups. The technique worked, and within a couple days I was off the O.K. plateau, and my card times began falling again at a steady clip. Before long, I was committing entire decks to memory in just a few minutes.
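
That pacing drill is simple to reproduce on your own. The Python below is a hedged sketch of the routine, not a prescription from the article: it flashes cards roughly 15 percent faster than your comfortable pace and then asks which ones tripped you up, so you can study why. The 15 percent figure (within the 10 to 20 percent range Ericsson suggests) and the plain-text card labels are arbitrary choices.

```python
# A small pacing drill in the spirit of the metronome advice above: present
# cards slightly faster than your comfortable pace, then record which ones
# caused "cognitive hiccups" so you can figure out why they slow you down.

import random
import time

RANKS = "A 2 3 4 5 6 7 8 9 10 J Q K".split()
SUITS = ["clubs", "diamonds", "hearts", "spades"]
DECK = [f"{r} of {s}" for s in SUITS for r in RANKS]


def drill(comfort_seconds_per_card: float, speedup: float = 0.15) -> None:
    pace = comfort_seconds_per_card * (1 - speedup)
    deck = DECK[:]
    random.shuffle(deck)
    print(f"Pacing at {pace:.2f}s per card (comfort pace was {comfort_seconds_per_card:.2f}s).")
    for card in deck:
        print(card)
        time.sleep(pace)  # the "metronome": one card per tick
    trouble = input("Cards that gave you trouble (comma-separated): ")
    for card in [c.strip() for c in trouble.split(",") if c.strip()]:
        print(f"Review why '{card}' slowed you down before the next run.")


if __name__ == "__main__":
    drill(comfort_seconds_per_card=10.0)  # Foer's plateau pace was about 1 card every 10 seconds
```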

More than anything, what differentiates top memorizers from the second tier is that they approach memorization like a science. They develop hypotheses about their limitations; they conduct experiments and track data. “It’s like you’re developing a piece of technology or working on a scientific theory,” the three-time world champ Andi Bell once told me. “You have to analyze what you’re doing.”

To have a chance at catapulting myself to the top tier of the competitive memorization circuit, my practice would have to be focused and deliberate. That meant I needed to collect data and analyze it for ways to tweak the images in my memory palaces and make them stickier.

Cooke, who took to referring to me as “son,” “young man” and “Herr Foer,” insisted that if I really wanted to ratchet up my training, I would need an equipment upgrade. All serious mnemonists wear earmuffs. A few of the most intense competitors wear blinders to constrict their field of view and shut out peripheral distractions.

“I find them ridiculous, but in your case, they may be a sound investment,” Cooke said on one of our twice-weekly phone check-ins. That afternoon, I went to the hardware store and bought a pair of industrial-grade earmuffs and a pair of plastic laboratory safety goggles. I spray-painted them black and drilled a small eyehole through each lens. Henceforth I would always wear them to practice.

What began as an exercise in participatory journalism became an obsession. True, what I hoped for before I started hadn’t come to pass: these techniques didn’t improve my underlying memory (the “hardware” of “Rhetorica ad Herennium”). I still lost my car keys. And I was hardly a fount of poetry. Even once I was able to squirrel away more than 30 digits a minute in memory palaces, I seldom memorized the phone numbers of people I actually wanted to call. It was easier to punch them into my cellphone. The techniques worked; I just didn’t always use them. Why bother when there’s paper, a computer or a cellphone to remember for you?

Yet, as the next U.S.A. Memory Championship approached, I began to suspect that I might actually have a chance of doing pretty well in it. In every event except the poem and speed numbers (which tests how many random digits you can memorize in five minutes) my best practice scores were approaching the top marks of previous U.S. champions. Cooke told me not to make too much of that fact. “You always do at least 20 percent worse under the lights,” he said, and he warned me about the “lackadaisical character” of my training.

“Lackadaisical” wasn’t the word I would have chosen. Now that I had put the O.K. plateau behind me, my scores were improving on an almost daily basis. The sheets of random numbers that I memorized were piling up in the drawer of my desk. The dog-eared pages of verse I learned by heart were accumulating in my “Norton Anthology of Modern Poetry.” To buoy my spirits, Cooke sent me a quotation from the venerable martial artist Bruce Lee: “There are no limits. There are plateaus, and you must not stay there; you must go beyond them. If it kills you, it kills you.” I copied that thought onto a Post-it note and stuck it on my wall. Then I tore it down and memorized it.

Most national memory contests, held in places like Bangkok, Melbourne and Hamburg, bill themselves as mental decathlons. Ten grueling events test the competitors’ memories, each in a slightly different way. Contestants have to memorize an unpublished poem spanning several pages, pages of random words (record: 280 in 15 minutes), lists of binary digits (record: 4,140 in 30 minutes), shuffled decks of playing cards, a list of historical dates and the names and faces of as many strangers as possible. Some disciplines, called speed events, test how much the contestants can memorize in five minutes (record: 480 digits). Two marathon disciplines test how many decks of cards and random digits they can memorize in an hour (records: 2,080 digits and 28 decks). In the most exciting event of the contest, speed cards, competitors race to commit a single pack of playing cards to memory as fast as possible.

When I showed up at the following year’s U.S.A. Memory Championship, I brought along my black spray-painted memory goggles for speed cards. Until the moment a freshly shuffled deck was placed on the desk in front of me, I was still weighing whether to put them on. I hadn’t practiced without my goggles in weeks, and the Con Edison auditorium was full of distractions. But there were also three television cameras circulating in the room. As one of them zoomed in for a close-up of my face, I thought of all the people I knew who might end up watching the broadcast: high-school classmates I hadn’t seen in years, friends who had no idea about my new memory obsession, my girlfriend’s parents. What would they think if they turned on their televisions and saw me wearing huge black safety goggles and earmuffs, thumbing through a deck of playing cards? In the end, my fear of public embarrassment trumped my competitive instincts.

From the front of the room, the chief arbiter, a former Army drill sergeant, shouted, “Go!” A judge sitting opposite me clicked her stopwatch, and I began peeling through the pack as fast as I could, flicking three cards at a time off the top of the deck and into my right hand. I was storing the images in the memory palace I knew better than any other, one based on the house in Washington in which I grew up. Inside the front door, the Incredible Hulk rode a stationary bike while a pair of oversize, loopy earrings weighed down his earlobes (three of clubs, seven of diamonds, jack of spades). Next to the mirror at the bottom of the stairs, Terry Bradshaw balanced on a wheelchair (seven of hearts, nine of diamonds, eight of hearts), and just behind him, a midget jockey in a sombrero parachuted from an airplane with an umbrella (seven of spades, eight of diamonds, four of clubs). I saw Jerry Seinfeld sprawled out bleeding on the hood of a Lamborghini in the hallway (five of hearts, ace of diamonds, jack of hearts), and at the foot of my parents’ bedroom door, I saw myself moonwalking with Einstein (four of spades, king of hearts, three of diamonds).

The art of speed cards lies in finding the perfect balance between moving quickly and forming detailed images. You want a large enough glimpse of your images to be able to reconstruct them later, without wasting precious time conjuring any more color than necessary. When I put my palms back down on the table to stop the clock, I knew that I’d hit a sweet spot in that balance. But I didn’t yet know how sweet.

The judge, who was sitting opposite me, flashed me the time on her stopwatch: 1 minute 40 seconds. I immediately recognized that not only was that better than anything I ever did in practice but that it also would shatter the United States record of 1 minute 55 seconds. I closed my eyes, put my head down on the table, whispered an expletive to myself and took a second to dwell on the fact that I had possibly just done something — however geeky, however trivial — better than it had ever been done by anyone in the entire country.

(By the standards of the international memory circuit, where 21.9 seconds is the best time, my 1:40 would have been considered middling — the equivalent of a 5-minute mile for the best Germans, British and Chinese.)

As word of my time traveled across the room, cameras and spectators began to assemble around my desk. The judge pulled out a second unshuffled deck of playing cards and pushed them across the table. My task now was to rearrange that pack to match the one I just memorized.

I fanned the cards out, took a deep breath and walked through my palace one more time. I could see all the images perched exactly where I left them, except for one. It should have been in the shower, dripping wet, but all I could spy were blank beige tiles.

“I can’t see it,” I whispered to myself frantically. “I can’t see it.” I ran through every single one of my images as fast as I could. Had I forgotten the fop wearing an ascot? Pamela Anderson’s rack? The Lucky Charms leprechaun? An army of turbaned Sikhs? No, no, no, no.

I began sliding the cards around the table with my index finger. Near the top of the desk, I put the Hulk on his bike. Next to that, I placed Terry Bradshaw with his wheelchair. As the clock ran down on my five minutes of recall time, I was left with three cards. They were the three that had disappeared from the shower: the king of diamonds, four of hearts and seven of clubs. Bill Clinton copulating with a basketball. How could I have possibly missed it?

I quickly neatened up the stack of cards into a square pile, shoved them back across the table to the judge and removed my earmuffs and earplugs.

One of the television cameras circled around for a better angle. The judge began flipping the cards over one by one, while, for dramatic effect, I did the same with the deck I’d memorized.

Two of hearts, two of hearts. Two of diamonds, two of diamonds. Three of hearts, three of hearts. Card by card, each one matched. When we got to the end of the decks, I threw the last card down on the table and pumped my fist. I was the new U.S. record holder in speed cards. A 12-year-old boy stepped forward, handed me a pen and asked for my autograph.

Joshua Foer is the author of “Moonwalking With Einstein: The Art and Science of Remembering Everything,” from which this article is adapted, to be published by Penguin Press next month.

New York Times Magazine

Thursday, February 10, 2011

Author Nicholas Carr: The Web Shatters Focus, Rewires Brains

During the winter of 2007, a UCLA professor of psychiatry named Gary Small recruited six volunteers—three experienced Web surfers and three novices—for a study on brain activity. He gave each a pair of goggles onto which Web pages could be projected. Then he slid his subjects, one by one, into the cylinder of a whole-brain magnetic resonance imager and told them to start searching the Internet. As they used a handheld keypad to Google various preselected topics—the nutritional benefits of chocolate, vacationing in the Galapagos Islands, buying a new car—the MRI scanned their brains for areas of high activation, indicated by increases in blood flow.

The two groups showed marked differences. Brain activity of the experienced surfers was far more extensive than that of the newbies, particularly in areas of the prefrontal cortex associated with problem-solving and decision-making. Small then had his subjects read normal blocks of text projected onto their goggles; in this case, scans revealed no significant difference in areas of brain activation between the two groups. The evidence suggested, then, that the distinctive neural pathways of experienced Web users had developed because of their Internet use.

The most remarkable result of the experiment emerged when Small repeated the tests six days later. In the interim, the novices had agreed to spend an hour a day online, searching the Internet. The new scans revealed that their brain activity had changed dramatically; it now resembled that of the veteran surfers. “Five hours on the Internet and the naive subjects had already rewired their brains,” Small wrote. He later repeated all the tests with 18 more volunteers and got the same results.

When first publicized, the findings were greeted with cheers. By keeping lots of brain cells buzzing, Google seemed to be making people smarter. But as Small was careful to point out, more brain activity is not necessarily better brain activity. The real revelation was how quickly and extensively Internet use reroutes people’s neural pathways. “The current explosion of digital technology not only is changing the way we live and communicate,” Small concluded, “but is rapidly and profoundly altering our brains.”

What kind of brain is the Web giving us? That question will no doubt be the subject of a great deal of research in the years ahead. Already, though, there is much we know or can surmise—and the news is quite disturbing. Dozens of studies by psychologists, neurobiologists, and educators point to the same conclusion: When we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning. Even as the Internet grants us easy access to vast amounts of information, it is turning us into shallower thinkers, literally changing the structure of our brain.

Back in the 1980s, when schools began investing heavily in computers, there was much enthusiasm about the apparent advantages of digital documents over paper ones. Many educators were convinced that introducing hyperlinks into text displayed on monitors would be a boon to learning. Hypertext would strengthen critical thinking, the argument went, by enabling students to switch easily between different viewpoints. Freed from the lockstep reading demanded by printed pages, readers would make all sorts of new intellectual connections between diverse works. The hyperlink would be a technology of liberation.

By the end of the decade, the enthusiasm was turning to skepticism. Research was painting a fuller, very different picture of the cognitive effects of hypertext. Navigating linked documents, it turned out, entails a lot of mental calisthenics—evaluating hyperlinks, deciding whether to click, adjusting to different formats—that are extraneous to the process of reading. Because it disrupts concentration, such activity weakens comprehension. A 1989 study showed that readers tended just to click around aimlessly when reading something that included hypertext links to other selected pieces of information. A 1990 experiment revealed that some “could not remember what they had and had not read.”

Even though the World Wide Web has made hypertext ubiquitous and presumably less startling and unfamiliar, the cognitive problems remain. Research continues to show that people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links. In a 2001 study, two scholars in Canada asked 70 people to read “The Demon Lover,” a short story by Elizabeth Bowen. One group read it in a traditional linear-text format; they’d read a passage and click the word “next” to move ahead. A second group read a version in which they had to click on highlighted words in the text to move ahead. It took the hypertext readers longer to read the document, and they were seven times more likely to say they found it confusing. Another researcher, Erping Zhu, had people read a passage of digital prose but varied the number of links appearing in it. She then gave the readers a multiple-choice quiz and had them write a summary of what they had read. She found that comprehension declined as the number of links increased—whether or not people clicked on them. After all, whenever a link appears, your brain has to at least make the choice not to click, which is itself distracting.

A 2007 scholarly review of hypertext experiments concluded that jumping between digital documents impedes understanding. And if links are bad for concentration and comprehension, it shouldn’t be surprising that more recent research suggests that links surrounded by images, videos, and advertisements could be even worse.

In a study published in the journal Media Psychology, researchers had more than 100 volunteers watch a presentation about the country of Mali, played through a Web browser. Some watched a text-only version. Others watched a version that incorporated video. Afterward, the subjects were quizzed on the material. Compared to the multimedia viewers, the text-only viewers answered significantly more questions correctly; they also found the presentation to be more interesting, more educational, more understandable, and more enjoyable.

The depth of our intelligence hinges on our ability to transfer information from working memory, the scratch pad of consciousness, to long-term memory, the mind’s filing system. When facts and experiences enter our long-term memory, we are able to weave them into the complex ideas that give richness to our thought. But the passage from working memory to long-term memory also forms a bottleneck in our brain. Whereas long-term memory has an almost unlimited capacity, working memory can hold only a relatively small amount of information at a time. And that short-term storage is fragile: A break in our attention can sweep its contents from our mind.

Imagine filling a bathtub with a thimble; that’s the challenge involved in moving information from working memory into long-term memory. When we read a book, the information faucet provides a steady drip, which we can control by varying the pace of our reading. Through our single-minded concentration on the text, we can transfer much of the information, thimbleful by thimbleful, into long-term memory and forge the rich associations essential to the creation of knowledge and wisdom.

On the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from tap to tap. We transfer only a small jumble of drops from different faucets, not a continuous, coherent stream.

Psychologists refer to the information flowing into our working memory as our cognitive load. When the load exceeds our mind’s ability to process and store it, we’re unable to retain the information or to draw connections with other memories. We can’t translate the new material into conceptual knowledge. Our ability to learn suffers, and our understanding remains weak. That’s why the extensive brain activity that Small discovered in Web searchers may be more a cause for concern than for celebration. It points to cognitive overload.
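The bottleneck described above lends itself to a toy simulation. The Python sketch below is not taken from any of the studies cited here; its working-memory capacity, transfer rate, and interruption probabilities are invented numbers, chosen only to illustrate why one steady stream of information is consolidated far better than many fast, frequently interrupted ones.

    import random

    WM_CAPACITY = 4        # items working memory can hold at once (assumed value)
    TRANSFER_PER_STEP = 1  # items consolidated into long-term memory per step (assumed value)

    def simulate(steps, items_per_step, interruption_prob):
        """Return (items retained in long-term memory, items that arrived)."""
        working, retained = [], 0
        for _ in range(steps):
            # New information arrives; anything beyond capacity is simply lost.
            working = (working + ["item"] * items_per_step)[:WM_CAPACITY]

            # A break in attention sweeps working memory clean.
            if random.random() < interruption_prob:
                working = []
                continue

            # Slow, steady consolidation into long-term memory.
            for _ in range(min(TRANSFER_PER_STEP, len(working))):
                working.pop(0)
                retained += 1
        return retained, steps * items_per_step

    random.seed(0)
    for label, rate, interrupt in [("steady single stream", 1, 0.02),
                                   ("many fast streams   ", 5, 0.30)]:
        kept, arrived = simulate(steps=100, items_per_step=rate, interruption_prob=interrupt)
        print(f"{label}: retained {kept} of {arrived} items ({kept / arrived:.0%})")

With these made-up parameters, the steady stream consolidates nearly every item that arrives, while the parallel, interrupted streams deliver five times as much raw information yet leave only a small fraction of it in long-term memory.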

The Internet is an interruption system. It seizes our attention only to scramble it. There’s the problem of hypertext and the many different kinds of media coming at us simultaneously. There’s also the fact that numerous studies—including one that tracked eye movement, one that surveyed people, and even one that examined the habits displayed by users of two academic databases—show that we start to read faster and less thoroughly as soon as we go online. Plus, the Internet has a hundred ways of distracting us from our onscreen reading. Most email applications check automatically for new messages every five or ten minutes, and people routinely click the Check for New Mail button even more frequently. Office workers often glance at their inbox 30 to 40 times an hour. Since each glance breaks our concentration and burdens our working memory, the cognitive penalty can be severe.

The penalty is amplified by what brain scientists call switching costs. Every time we shift our attention, the brain has to reorient itself, further taxing our mental resources. Many studies have shown that switching between just two tasks can add substantially to our cognitive load, impeding our thinking and increasing the likelihood that we’ll overlook or misinterpret important information. On the Internet, where we generally juggle several tasks, the switching costs pile ever higher.

The Net’s ability to monitor events and send out messages and notifications automatically is, of course, one of its great strengths as a communication technology. We rely on that capability to personalize the workings of the system, to program the vast database to respond to our particular needs, interests, and desires. We want to be interrupted, because each interruption—email, tweet, instant message, RSS headline—brings us a valuable piece of information. To turn off these alerts is to risk feeling out of touch or even socially isolated. The stream of new information also plays to our natural tendency to overemphasize the immediate. We crave the new even when we know it’s trivial.

And so we ask the Internet to keep interrupting us in ever more varied ways. We willingly accept the loss of concentration and focus, the fragmentation of our attention, and the thinning of our thoughts in return for the wealth of compelling, or at least diverting, information we receive. We rarely stop to think that it might actually make more sense just to tune it all out.

The mental consequences of our online info-crunching are not universally bad. Certain cognitive skills are strengthened by our use of computers and the Net. These tend to involve more primitive mental functions, such as hand-eye coordination, reflex response, and the processing of visual cues. One much-cited study of videogaming, published in Nature in 2003, revealed that after just 10 days of playing action games on computers, a group of young people had significantly boosted the speed with which they could shift their visual focus between various images and tasks.

It’s likely that Web browsing also strengthens brain functions related to fast-paced problem-solving, particularly when it requires spotting patterns in a welter of data. A British study of the way women search for medical information online indicated that an experienced Internet user can, at least in some cases, assess the trustworthiness and probable value of a Web page in a matter of seconds. The more we practice surfing and scanning, the more adept our brain becomes at those tasks. (Other academics, like Clay Shirky, maintain that the Web provides us with a valuable outlet for a growing “cognitive surplus”; see Cognitive Surplus: The Great Spare-Time Revolution.)

But it would be a serious mistake to look narrowly at such benefits and conclude that the Web is making us smarter. In a Science article published in early 2009, prominent developmental psychologist Patricia Greenfield reviewed more than 40 studies of the effects of various types of media on intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies, she wrote, has led to the “widespread and sophisticated development of visual-spatial skills.” But those gains go hand in hand with a weakening of our capacity for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”

We know that the human brain is highly plastic; neurons and synapses change as circumstances change. When we adapt to a new cultural phenomenon, including the use of a new medium, we end up with a different brain, says Michael Merzenich, a pioneer of the field of neuroplasticity. That means our online habits continue to reverberate in the workings of our brain cells even when we’re not at a computer. We’re exercising the neural circuits devoted to skimming and multitasking while ignoring those used for reading and thinking deeply.

Last year, researchers at Stanford found signs that this shift may already be well under way. They gave a battery of cognitive tests to a group of heavy media multitaskers as well as a group of relatively light ones. They discovered that the heavy multitaskers were much more easily distracted, had significantly less control over their working memory, and were generally much less able to concentrate on a task. Intensive multitaskers are “suckers for irrelevancy,” says Clifford Nass, one of the professors who conducted the research. “Everything distracts them.” Merzenich offers an even bleaker assessment: As we multitask online, we are “training our brains to pay attention to the crap.”

There’s nothing wrong with absorbing information quickly and in bits and pieces. We’ve always skimmed newspapers more than we’ve read them, and we routinely run our eyes over books and magazines to get the gist of a piece of writing and decide whether it warrants more thorough reading. The ability to scan and browse is as important as the ability to read deeply and think attentively. The problem is that skimming is becoming our dominant mode of thought. Once a means to an end, a way to identify information for further study, it’s becoming an end in itself—our preferred method of both learning and analysis. Dazzled by the Net’s treasures, we are blind to the damage we may be doing to our intellectual lives and even our culture.

What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization: We are evolving from cultivators of personal knowledge into hunters and gatherers in the electronic data forest. In the process, we seem fated to sacrifice much of what makes our minds so interesting.

Adapted from The Shallows: What the Internet Is Doing to Our Brains, copyright © 2010 by Nicholas Carr, to be published by W. W. Norton & Company in June. Nicholas Carr (ncarr@mac.com) is also the author of The Big Switch and Does IT Matter?

Wired Magazine