Wednesday, December 29, 2010

In Pursuit of a Mind Map, Slice by Slice

By ASHLEE VANCE

CAMBRIDGE, Mass. — Dr. Jeff Lichtman likes his brains sliced thin — very, very thin.

C.J. Gunther for The New York Times

TEAM LEADER Dr. Jeff Lichtman, with a 3-D image of a section of mouse brain and a magnified section of a dendrite (red), in his office at Harvard.


C.J. Gunther for The New York Times

ILLUMINATING A section of a mouse brain about 30 nanometers thick, ready for an electron microscope at Harvard. Researchers liken the cutting to shaving off the surface of a football field at a thickness of one-hundredth of an inch.

Dr. Lichtman and his team of researchers at Harvard have built some unusual contraptions that carve off slivers of mouse brains as part of a quest to understand how the mind works. Their goal is to run slice after minuscule slice under a powerful electron microscope, develop detailed pictures of the brain’s complex wiring and then stitch the images back together. In short, they want to build a full map of the mind.

The field, at a very nascent stage, is called connectomics, and the neuroscientists pursuing it compare their work to early efforts in genetics. What they are doing, these scientists say, is akin to trying to crack the human genome — only this time around, they want to find how memories, personality traits and skills are stored.

They want to find a connectome, or the mental makeup of a person.

“You are born with your genes, and they don’t change afterward,” said H. Sebastian Seung, a professor of computational neuroscience at the Massachusetts Institute of Technology who is working on the computer side of connectomics. “The connectome is a product of your genes and your experiences. It’s where nature meets nurture.”

The task is arduous and years from fruition, and even the biggest zealots acknowledge that their work may not pay off. But connectomics has gotten some meaningful financing: In September, the National Institutes of Health handed out $40 million in grants to researchers at Harvard, Washington University in St. Louis, the University of Minnesota and the University of California, Los Angeles, to pursue connectomics. Together, their research efforts make up the Human Connectome Project.

Since the 1970s, researchers have had only one connectome to play with — that of a worm with a measly 300 neurons. Now they are trying a mouse brain, with its 100 million neurons. So far the notion of creating a human-scale connectome — which would illuminate all of the connections among more than 100 billion neurons and unravel the millions of miles of wires in the brain — has proved too daunting.

The task at hand is somewhat similar to trying to untangle a bowl of spaghetti. Each individual spaghetti strand may touch tens of other strands as it weaves in a contorted fashion through the bowl. In this case, the researchers want to do the equivalent of seeing where all the strands connect at the atom level.

And because the brain’s wiring is so densely packed, building a connectome stands as one of the most formidable data collection efforts ever concocted. About one petabyte of computer memory will be needed to store the images needed to form a picture of a one-millimeter cube of mouse brain, the scientists say. By comparison, it takes Facebook about one petabyte of data storage space to hold 40 billion photos.
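The arithmetic behind that petabyte figure is easy to sketch. Here is a minimal back-of-envelope estimate in Python, assuming a lateral pixel size of 5 nanometers, the roughly 30-nanometer slice thickness described later in the article, and one byte per grayscale pixel (none of these parameters are given explicitly, so treat them as assumptions):

# Rough storage estimate for imaging a 1-millimeter cube of mouse brain.
# Assumed, not from the article: 5 nm lateral pixel size, 30 nm slice
# thickness, 1 byte per grayscale pixel.
cube_side_m = 1e-3
pixel_m = 5e-9
slice_m = 30e-9
bytes_per_pixel = 1

pixels_per_slice = (cube_side_m / pixel_m) ** 2   # 4e10 pixels per slice
num_slices = cube_side_m / slice_m                # ~33,000 slices
total_bytes = pixels_per_slice * num_slices * bytes_per_pixel

print(f"{total_bytes / 1e15:.1f} petabytes")      # ~1.3 PB

Under those assumptions the total lands close to the scientists' one-petabyte figure; a coarser lateral resolution would shrink it quadratically.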

“The world is not yet ready for the million-petabyte data set the human brain would be,” Dr. Lichtman said. “But it will be.”

Neuroscientists say that a connectome could give them myriad insights about the brain’s function and prove particularly useful in the exploration of mental illness. For the first time, researchers and doctors might be able to determine how someone was wired — quite literally — and compare that picture with “regular” brains. Surgeons armed with a connectome might also be able to make more calculated cuts in the brain.

“The connectome project is going to show where all the white matter — all the connecting fibers — are,” said Stanley Klein, a professor of optometry and vision science at the University of California, Berkeley. “The whole goal in something like a surgery for epilepsy is to delicately slice out some of the white matter without removing any cortex.”

Dr. Klein says he has “zero question” that this type of surgery could benefit from developing a connectome.

Other scientists doubt that the results will match the effort. The comparisons to the genome prove haunting, and critics suggest that the connectome fans are wasting valuable research dollars and setting themselves up for a huge letdown.

“There are people that argue we still just don’t know enough about the brain to know where to look for insights,” said Bradley Voytek, a researcher at the Helen Wills Neuroscience Institute at the University of California, Berkeley. “They also contend that there is no possible way you can build a full connectome in any realistic time frame.”

What’s more, even if the researchers succeed, they will only produce a static picture of a brain frozen in time, rather than something that shows how a brain responds to different types of stimuli.

Scientists around the world, including Stephen J. Smith, a neuroscience professor at Stanford, and Gerald M. Rubin, a researcher with the Howard Hughes Medical Institute, have pushed past the naysayers and developed varying techniques for mapping the brains and nervous systems of humans as well as other creatures.

“There are some people who say, ‘Maybe you don’t need this information, and given the expense of it, maybe you should put it off,’ ” said Dr. Lichtman, a professor of molecular and cellular biology at Harvard. “It’s a fair controversy.”

Harvard recruited Dr. Lichtman to push the connectome quest to its limits by tackling an entire mouse brain at the finest scale and allowed him to set up his own connectome research laboratory, staffed with four other people.

In the basement quarters that house Lichtman Lab, the researchers go to work anesthetizing mice, slicing open their rib cages and using the animals’ circulatory systems to spread concoctions that preserve the flesh and tune it for the electron microscope. Now and again, a researcher will reach into one of the boxes of mouse food pellets littered around the lab for sustenance during the tedious work.

“They’re not too bad,” said Bobby Kasthuri, one of the researchers.

With the body prepared, the slicing can begin.

Machines built by Kenneth J. Hayworth, another one of the researchers, can shear off slices of a mouse brain just 29.4 nanometers thick using a diamond knife blade. To convey the scale of the feat, the researchers liken the cutting to shaving off the entire surface of a football field at a thickness of one-hundredth of an inch.
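The analogy can be checked with two ratios: slice thickness against brain size, and shaving depth against field length. A quick sketch, assuming a mouse brain roughly one centimeter across and a 100-meter field (both are assumptions, not figures from the article):

# Compare the slice-to-brain ratio with the shave-to-field ratio.
slice_thickness_m = 29.4e-9      # from the article
brain_size_m = 1e-2              # assumed: mouse brain ~1 cm across
shave_depth_m = 0.01 * 0.0254    # one-hundredth of an inch, in meters
field_length_m = 100.0           # assumed field length

print(slice_thickness_m / brain_size_m)   # ~2.9e-6
print(shave_depth_m / field_length_m)     # ~2.5e-6

The two ratios agree to within about 15 percent, so the comparison holds up.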

Mr. Hayworth devised techniques for floating the brain slivers across a tiny puddle of water where surface tension carries them to a clear plastic tape. The tape backing adds some sturdiness to the slivers and makes it possible to place scores of them on a silicon wafer that then goes under the electron microscope.

At Lichtman Lab, the researchers are marching across a mouse brain in linear fashion, gathering the slices, imaging them and then putting the puzzle back together. Once assembled by a computer, the images of the brain are beautiful.

Dr. Lichtman and his colleagues give individual brain cells unique colors, making it easier to follow the wiring of a single neuron’s extensive axon and dendrite branches. The microscopes and computers they use can twist and turn these psychedelic images and zoom in and out at will.

It takes about three days for the researchers to carve 7,000 sections of a mouse’s cerebral cortex.

“The cutting is easy,” Dr. Lichtman said. “The big time sink is imaging.”

Dr. Lichtman estimates it will be several years before they can contemplate a connectome of a mouse brain, but there are some technological advances on the horizon that could cut that time significantly. Needless to say, a human brain would be far more complex and time-consuming.

“Hopefully, we are returning with a burst of new energy to the question of how the brain is wired up,” said Gary S. Lynch, a well-known brain researcher at the University of California, Irvine. “Lacking a blueprint, we’re never going to get anywhere on the most profound and fun questions that drew everyone to neuroscience in the first place: what is thought, consciousness?”

A connectome would provide a far more detailed look at the brain’s inner workings than current techniques that measure blood flow in certain regions. The researchers contend that it would literally show how people are wired and illuminate differences in the brains of people with mental illness.

As Mr. Kasthuri, the Harvard researcher, put it: “It will either be a great success story or a massive cautionary tale.”

Monday, December 27, 2010

Edible iPhone treats selling like hot cakes


BY ASAKO HANAFUSA STAFF WRITER

2010/12/27


Photo: Apple Inc.'s iPhone 3GS, left, and an iPhone cookie sold at bakery Green Gables (Asako Hanafusa)
Photo: Kumiko Kudo making cookies (Asako Hanafusa)

TOKUSHIMA--Hand-made chocolate cookies in the shape of the iPhone have become the latest must-have accessory for the tech-savvy gourmet.

The biscuits, about 12 centimeters long and 6 centimeters wide, are made by Kumiko Kudo, the 44-year-old owner of the bakery Green Gables in Aizumi, Tokushima Prefecture. They replicate Apple Inc.'s iconic device on a chocolate base, with the icons neatly drawn in red, green and blue icing.

Kudo said the idea for the biscuits came from one of her customers, who in October 2008 asked her to make a look-alike of the iPod touch media player as a birthday gift for the customer's husband.

Kudo mistook the gadget for the very similar iPhone, which had just appeared on the market, but the customer was delighted by the end product.

News of Kudo's creation did not spread widely until January, when the well-known economic critic Kazuyo Katsuma posted a message about it on the Internet micro-blogging site Twitter.

A few days earlier, a 42-year-old company worker in Tokyo had seen the iPhone cookie on the blog of Kudo's bakery and ordered two to give to Katsuma and singer Komi Hirose, who coauthored a book on Twitter.

Katsuma immediately posted a message on Twitter heralding the "amazing iPhone cookie." Hirose also posted a message about the "edible iPhone."

Their many followers read the messages and news of the cookie spread quickly. Orders began to pour in.

When Kudo was invited to an event held by Softbank Corp. in March, she handed one of the biscuits to the company's president, Masayoshi Son, who had earlier posted his own Twitter message saying: "I want one!" Son was overjoyed: "I'm so happy. I cannot possibly eat this," he said.

Kudo, who makes all her own cakes and biscuits, says she can create no more than 20 iPhone cookies a day. One biscuit is priced at 2,730 yen ($33), including tax.

Kudo has received requests for iPad cookies. She said she has experimented, but "it turned out to be too big, heavy and difficult to make."

Intel, AMD to Unveil Combination Chips



Chip makers soon will deliver one of the biggest advances in years in the technology that powers laptop and desktop computers. But how much consumers—and the chip companies—will benefit is in question.


The design trend, expected to be the focus of announcements by Intel Corp. and Advanced Micro Devices Inc. at the Consumer Electronics Show early next month, is based on bringing together two long-separate classes of products: microprocessors, the calculating engines that run most PC software; and graphics processing units, which render images in videogames and other programs.

Putting the two technologies on one piece of silicon reduces the distance electrical signals must travel and speeds up some computing chores. It also lowers the number of components computer makers need to buy, cutting production costs and helping to shrink the size of computers. Such integrated chips are expected to allow low-priced systems to carry out tasks that currently add hundreds of dollars to the price of a personal computer, such as the ability to play high-definition movies and videogames and to convert video and audio files to different formats quickly.

The approach "is going to change the way people build PCs and buy PCs," Paul Otellini, Intel's chief executive, predicted at an investor conference early this month.

But the benefits won't be measurable until after the CES show, when computer makers are expected to disclose their plans for using the technology. And some industry executives insist that many PC users will continue to seek even better performance by picking systems with separate graphics-processing-unit chips.

Intel, which supplies roughly four-fifths of the microprocessors used in PCs, is using the event to introduce a broad overhaul of its flagship Core product line using a design that is code-named Sandy Bridge. The products add GPU circuitry that Intel has offered in companion chipsets, as well as video processing and other undisclosed features aimed at improving the visual experience of using PCs—technologies Intel plans to market as part of a campaign called Visibly Smart.

Mr. Otellini said demand is "very, very strong" for the chips, which are expected to be used in hundreds of new designs for laptop and desktop PCs at various price points. Intel also is expected to offer a new version of a technology known as Wi-Di, which allows laptop users to wirelessly display images on high-definition TV sets.

The trend is at least as important for AMD, perennial underdog to Intel in the microprocessor market. AMD spent $5.4 billion in 2006 to buy ATI Technologies, one of two big makers of GPUs, and vowed then to combine that technology with its microprocessors by early 2009 in an initiative it calls Fusion.

That effort took longer than the company anticipated. AMD is using the CES trade show to introduce microprocessors with GPU circuitry that are targeted at laptops in the $200 to $500 range. But it doesn't expect to offer high-end Fusion chips that could directly compete with Intel's overhauled Core line until the middle of next year.

AMD expects the chips being introduced at the CES show to add much better capabilities for playing games and high-definition videos to a low-end portable category known as netbooks, a market Intel has dominated. "We are bringing just this incredible amount of visual and computing power to segments where it hasn't been seen before," said Rick Bergman, an AMD senior vice president who is general manager of its products group.

The third player affected by the trend is Nvidia Corp. The Silicon Valley company competes fiercely with AMD in sales of GPUs, but agrees with its rival on one point: The graphics circuitry added in Sandy Bridge—though an improvement over Intel's past efforts—still isn't adequate for many applications.

Both companies note that the new Intel chips don't support a Microsoft Corp. programming technology called DirectX 11, which some popular videogames require, while their own products do. An Intel spokesman responded that many widely used games will run fine on Sandy Bridge, which the company predicts will make separate GPUs unnecessary in low-end PCs.

Nvidia says many PC makers don't seem to agree with Intel's assertion, with more than 200 forthcoming models based on Sandy Bridge also including its GPUs.

"We have more design wins in Sandy Bridge than any other platform," said Nvidia CEO Jen-Hsun Huang.

Mr. Huang says the new Intel chips with built-in graphics, instead of hurting Nvidia, will help the company by driving demand for PCs—largely because of other technology improvements. "I think this is the best microprocessor that's been built for quite a long time," he said.

Intel hasn't disclosed performance estimates for the new chips, which are expected to start with high-end models that have the equivalent of four calculating engines.

One person who has tested the technology is Kelt Reeves, president of the gaming-PC maker Falcon Northwest. While the graphics performance won't satisfy gamers, in Mr. Reeves's opinion, the four processors on Sandy Bridge chips top the performance of six processors on existing Intel products. The chips are "ridiculously good," he said.

Write to Don Clark at don.clark@wsj.com


Sunday, December 26, 2010

New shrimp and blue poppies identified


By TOMOYUKI YAMAMOTO Staff Writer

2010/12/25


Photo: This red-and-white shrimp was identified as a new species. (Yusuke Yamada)
Photo: A new species of blue poppy (Toshio Yoshida)

A tiny red-and-white shrimp and two varieties of blue poppies that were found by Japanese researchers have been identified as new species.

The shrimp, about 1 centimeter long, were found in shallow waters off Okinawa's main island and in the Indian Ocean off Madagascar, nearly 10,000 kilometers away.

Tomoyuki Komai and other researchers at the Natural History Museum and Institute, Chiba, identified the creature as a member of the Alpheidae family.

Their research was reported in Zootaxa, an academic journal published in New Zealand.

"The shrimp appear to live over a wide area of the sea," said Komai. "Perhaps because they are so small, they had not been identified, despite their conspicuous red-and-white color, reminiscent of Santa Claus."

The blue poppies were found growing on a mountain in the southwestern part of China's Sichuan province by plant photographer Toshio Yoshida in August 2009.

Yoshida, 61, spotted the flowers blooming among rocks on a mountain more than 4,000 meters high. The Chinese call the family of blue poppies "phantom flowers."

They were identified as two new species--M. heterandra and M. pulchella--through joint research by Yoshida and scholars at Harvard University in the United States and at the Kunming Institute of Botany at the Chinese Academy of Sciences.

More than 40 species of blue poppies are known around the world.

Saturday, December 25, 2010

A Scientist, His Work and a Climate Reckoning

Temperature Rising


Jonathan Kingston/Aurora Select, for The New York Times

KEEPING WATCH The Mauna Loa Observatory, at an altitude of 11,135 feet above sea level in Hawaii, has been continuously monitoring and collecting data related to climate change since the 1950s.

MAUNA LOA OBSERVATORY, Hawaii — Two gray machines sit inside a pair of utilitarian buildings here, sniffing the fresh breezes that blow across thousands of miles of ocean.


Scripps Institution of Oceanography; U.C. San Diego

THE KEELINGS Charles D. Keeling with his son Ralph in 1989.


They make no noise. But once an hour, they spit out a number, and for decades, it has been rising relentlessly.

The first machine of this type was installed on Mauna Loa in the 1950s at the behest of Charles David Keeling, a scientist from San Diego. His resulting discovery, of the increasing level of carbon dioxide in the atmosphere, transformed the scientific understanding of humanity’s relationship with the earth. A graph of his findings is inscribed on a wall in Washington as one of the great achievements of modern science.

Yet, five years after Dr. Keeling’s death, his discovery is a focus not of celebration but of conflict. It has become the touchstone of a worldwide political debate over global warming.

When Dr. Keeling, as a young researcher, became the first person in the world to develop an accurate technique for measuring carbon dioxide in the air, the amount he discovered was 310 parts per million. That means every million pints of air, for example, contained 310 pints of carbon dioxide.

By 2005, the year he died, the number had risen to 380 parts per million. Sometime in the next few years it is expected to pass 400. Without stronger action to limit emissions, the number could pass 560 before the end of the century, double what it was before the Industrial Revolution.

The greatest question in climate science is: What will that do to the temperature of the earth?

Scientists have long known that carbon dioxide traps heat at the surface of the planet. They cite growing evidence that the inexorable rise of the gas is altering the climate in ways that threaten human welfare.

Fossil fuel emissions, they say, are like a runaway train, hurtling the world’s citizens toward a stone wall — a carbon dioxide level that, over time, will cause profound changes.

The risks include melting ice sheets, rising seas, more droughts and heat waves, more flash floods, worse storms, extinction of many plants and animals, depletion of sea life and — perhaps most important — difficulty in producing an adequate supply of food. Many of these changes are taking place at a modest level already, the scientists say, but are expected to intensify.

Reacting to such warnings, President George H. W. Bush committed the United States in 1992 to limiting its emissions of greenhouse gases, especially carbon dioxide. Scores of other nations made the same pledge, in a treaty that was long on promises and short on specifics.

But in 1997, when it came time to commit to details in a document known as the Kyoto Protocol, Congress balked. Many countries did ratify the protocol, but it had only a limited effect, and the past decade has seen little additional progress in controlling emissions.

Many countries are reluctant to commit themselves to tough emission limits, fearing that doing so will hurt economic growth. International climate talks in Cancún, Mexico, this month ended with only modest progress. The Obama administration, which came into office pledging to limit emissions in the United States, scaled back its ambitions after climate and energy legislation died in the Senate this year.

Challengers have mounted a vigorous assault on the science of climate change. Polls indicate that the public has grown more doubtful about that science. Some of the Republicans who will take control of the House of Representatives in January have promised to subject climate researchers to a season of new scrutiny.

One of them is Representative Dana Rohrabacher, Republican of California. In a recent Congressional hearing on global warming, he said, “The CO2 levels in the atmosphere are rather undramatic.”

But most scientists trained in the physics of the atmosphere have a different reaction to the increase.

“I find it shocking,” said Pieter P. Tans, who runs the government monitoring program of which the Mauna Loa Observatory is a part. “We really are in a predicament here, and it’s getting worse every year.”

As the political debate drags on, the mute gray boxes atop Mauna Loa keep spitting out their numbers, providing a reality check: not only is the carbon dioxide level rising relentlessly, but the pace of that rise is accelerating over time.

“Nature doesn’t care how hard we tried,” Jeffrey D. Sachs, the Columbia University economist, said at a recent seminar. “Nature cares how high the parts per million mount. This is running away.”

A Passion for Precision

Perhaps the biggest reason the world learned of the risk of global warming was the unusual personality of a single American scientist.

Charles David Keeling’s son Ralph remembers that when he was a child, his family bought a new home in Del Mar, Calif., north of San Diego. His father assigned him the task of edging the lawn. Dr. Keeling insisted that Ralph copy the habits of the previous owner, an Englishman who had taken pride in his garden, cutting a precise two-inch strip between the sidewalk and the grass.

“It took a lot of work to maintain this attractive gap,” Ralph Keeling recalled, but he said his father believed “that was just the right way to do it, and if you didn’t do that, you were cutting corners. It was a moral breach.”

Dr. Keeling was a punctilious man. It was by no means his defining trait — relatives and colleagues described a man who played a brilliant piano, loved hiking mountains and might settle a friendly argument at dinner by pulling an etymological dictionary off the shelf.

But the essence of his scientific legacy was his passion for doing things in a meticulous way. It explains why, even as challengers try to pick apart every other aspect of climate science, his half-century record of carbon dioxide measurements stands unchallenged.

By the 1950s, when Dr. Keeling was completing his scientific training, scientists had been observing the increasing use of fossil fuels and wondering whether carbon dioxide in the air was rising as a result. But nobody had been able to take accurate measurements of the gas.

As a young researcher, Dr. Keeling built instruments and developed techniques that allowed him to achieve great precision in making such measurements. Then he spent the rest of his life applying his approach.

In his earliest measurements of the air, taken in California and other parts of the West in the mid-1950s, he found that the background level for carbon dioxide was about 310 parts per million.

That discovery drew attention in Washington, and Dr. Keeling soon found himself enjoying government backing for his research. He joined the staff of the Scripps Institution of Oceanography, in the La Jolla section of San Diego, under the guidance of an esteemed scientist named Roger Revelle, and began laying plans to measure carbon dioxide around the world.

Some of the most important data came from an analyzer he placed in a government geophysical observatory that had been set up a few years earlier in a remote location: near the top of Mauna Loa, one of the volcanoes that loom over the Big Island of Hawaii.

He quickly made profound discoveries. One was that carbon dioxide oscillated slightly according to the seasons. Dr. Keeling realized the reason: most of the world’s land is in the Northern Hemisphere, and plants there were taking up carbon dioxide as they sprouted leaves and grew over the summer, then shedding it as the leaves died and decayed in the winter.

He had discovered that the earth itself was breathing.

A more ominous finding was that each year, the peak level was a little higher than the year before. Carbon dioxide was indeed rising, and quickly. That finding electrified the small community of scientists who understood its implications. Later chemical tests, by Dr. Keeling and others, proved that the increase was due to the combustion of fossil fuels.
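Put together, the seasonal breathing and the rising annual peaks give the record its characteristic rising-sawtooth shape. Here is a toy model of that shape in Python; the trend coefficients and the roughly 6-parts-per-million seasonal swing are illustrative stand-ins, not values fitted to the Scripps record:

import numpy as np

# Toy model of the Mauna Loa record: an accelerating trend plus a
# seasonal cycle from Northern Hemisphere plant growth and decay.
years = np.linspace(1958, 2010, 625)           # roughly monthly samples
t = years - 1958

trend = 315 + 0.8 * t + 0.0125 * t**2          # illustrative ppm trend
seasonal = 3.0 * np.sin(2 * np.pi * years)     # ~6 ppm peak-to-trough
co2 = trend + seasonal

print(f"{co2[0]:.0f} ppm in 1958, {co2[-1]:.0f} ppm in 2010")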

The graph showing rising carbon dioxide levels came to be known as the Keeling Curve. Many Americans have never heard of it, but to climatologists, it is the most recognizable emblem of their science, engraved in bronze on a building at Mauna Loa and carved into a wall at the National Academy of Sciences in Washington.

By the late 1960s, a decade after Dr. Keeling began his measurements, the trend of rising carbon dioxide was undeniable, and scientists began to warn of the potential for a big increase in the temperature of the earth.

Dr. Keeling’s mentor, Dr. Revelle, moved to Harvard, where he lectured about the problem. Among the students in the 1960s who first saw the Keeling Curve displayed in Dr. Revelle’s classroom was a senator’s son from Tennessee named Albert Arnold Gore Jr., who marveled at what it could mean for the future of the planet.

Throughout much of his career, Dr. Keeling was cautious about interpreting his own measurements. He left that to other people while he concentrated on creating a record that would withstand scrutiny.

John Chin, a retired technician in Hawaii who worked closely with Dr. Keeling, recently described the painstaking steps he took, at Dr. Keeling’s behest, to ensure accuracy. Many hours were required every week just to be certain that the instruments atop Mauna Loa had not drifted out of kilter.

The golden rule was “no hanky-panky,” Mr. Chin recalled in an interview in Hilo, Hawaii. Dr. Keeling and his aides scrutinized the records closely, and if workers in Hawaii fell down on the job, Mr. Chin said, they were likely to get a call or letter: “What did you do? What happened that day?”

In later years, as the scientific evidence about climate change grew, Dr. Keeling’s interpretations became bolder, and he began to issue warnings. In an essay in 1998, he replied to claims that global warming was a myth, declaring that the real myth was that “natural resources and the ability of the earth’s habitable regions to absorb the impacts of human activities are limitless.”

Still, by the time he died, global warming had not become a major political issue. That changed in 2006, when Mr. Gore’s movie and book, both titled “An Inconvenient Truth,” brought the issue to wider public attention. The Keeling Curve was featured in both.

In 2007, a body appointed by the United Nations declared that the scientific evidence that the earth was warming had become unequivocal, and it added that humans were almost certainly the main cause. Mr. Gore and the panel jointly won the Nobel Peace Prize.

But as action began to seem more likely, the political debate intensified, with fossil-fuel industries mobilizing to fight emission-curbing measures. Climate-change contrarians increased their attack on the science, taking advantage of the Internet to distribute their views outside the usual scientific channels.

In an interview in La Jolla, Dr. Keeling’s widow, Louise, said that if her husband had lived to see the hardening of the political battle lines over climate change, he would have been dismayed.

“He was a registered Republican,” she said. “He just didn’t think of it as a political issue at all.”

The Numbers

Not long ago, standing on a black volcanic plain two miles above the Pacific Ocean, the director of the Mauna Loa Observatory, John E. Barnes, pointed toward a high metal tower.

Samples are taken by hoses that snake to the top of the tower to ensure that only clean air is analyzed, he explained. He described other measures intended to guarantee an accurate record. Then Dr. Barnes, who works for the National Oceanic and Atmospheric Administration, displayed the hourly calculation from one of the analyzers.

It showed the amount of carbon dioxide that morning as 388 parts per million.

After Dr. Keeling had established the importance of carbon dioxide measurements, the government began making its own, in the early 1970s. Today, a NOAA monitoring program and the Scripps Institution of Oceanography program operate in parallel at Mauna Loa and other sites, with each record of measurements serving as a quality check on the other.

The Scripps program is now run by Ralph Keeling, who grew up to become a renowned atmospheric scientist in his own right and then joined the Scripps faculty. He took control of the measurement program after his father’s sudden death from a heart attack.

In an interview on the Scripps campus in La Jolla, Ralph Keeling calculated that the carbon dioxide level at Mauna Loa was likely to surpass 400 by May 2014, a sort of odometer moment in mankind’s alteration of the atmosphere.

“We’re going to race through 400 like we didn’t see it go by,” Dr. Keeling said.
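That estimate is consistent with a simple accelerating-trend extrapolation from numbers already quoted in this article: roughly 310 parts per million in the late 1950s, 380 in 2005 and 390 in 2010. A sketch (the exact 1957 start year is an assumption):

import numpy as np

# Fit a quadratic through three CO2 readings mentioned in the article
# and ask when it crosses 400 ppm.
t = np.array([0.0, 48.0, 53.0])          # years since 1957 (assumed start)
ppm = np.array([310.0, 380.0, 390.0])

a, b, c = np.polyfit(t, ppm, 2)          # exact quadratic through 3 points
roots = np.roots([a, b, c - 400.0]).real
year_400 = 1957 + roots[roots > 0].max()

print(f"crosses 400 ppm around {year_400:.0f}")   # ~2015

A fit this crude should not be over-read, but it lands within about a year of Ralph Keeling's May 2014 estimate.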

What do these numbers mean?

The basic physics of the atmosphere, worked out more than a century ago, show that carbon dioxide plays a powerful role in maintaining the earth’s climate. Even though the amount in the air is tiny, the gas is so potent at trapping the sun’s heat that it effectively works as a one-way blanket, letting visible light in but stopping much of the resulting heat from escaping back to space.

Without any of the gas, the earth would most likely be a frozen wasteland — according to a recent study, its average temperature would be colder by roughly 60 degrees Fahrenheit. But scientists say humanity is now polluting the atmosphere with too much of a good thing.

In recent years, researchers have been able to put the Keeling measurements into a broader context. Bubbles of ancient air trapped by glaciers and ice sheets have been tested, and they show that over the past 800,000 years, the amount of carbon dioxide in the air oscillated between roughly 200 and 300 parts per million. Just before the Industrial Revolution, the level was about 280 parts per million and had been there for several thousand years.

That amount of the gas, in other words, produced the equable climate in which human civilization flourished.

Other studies, covering many millions of years, show a close association between carbon dioxide and the temperature of the earth. The gas seemingly played a major role in amplifying the effects of the ice ages, which were caused by wobbles in the earth’s orbit.

The geologic record suggests that as the earth began cooling, the amount of carbon dioxide fell, probably because much of it got locked up in the ocean, and that fall amplified the initial cooling. Conversely, when the orbital wobble caused the earth to begin warming, a great deal of carbon dioxide escaped from the ocean, amplifying the warming.

Richard B. Alley, a climate scientist at Pennsylvania State University, refers to carbon dioxide as the master control knob of the earth’s climate. He said that because the wobbles in the earth’s orbit were not, by themselves, big enough to cause the large changes of the ice ages, the situation made sense only when the amplification from carbon dioxide was factored in.

“What the ice ages tell us is that our physical understanding of CO2 explains what happened and nothing else does,” Dr. Alley said. “The ice ages are a very strong test of whether we’ve got it right.”

When people began burning substantial amounts of coal and oil in the 19th century, the carbon dioxide level began to rise. It is now about 40 percent higher than before the Industrial Revolution, and humans have put half the extra gas into the air since just the late 1970s. Emissions are rising so rapidly that some experts fear that the amount of the gas could double or triple before emissions are brought under control.

The earth’s history offers no exact parallel to the human combustion of fossil fuels, so scientists have struggled to calculate the effect.

Their best estimate is that if the amount of carbon dioxide doubles, the temperature of the earth will rise about five or six degrees Fahrenheit. While that may sound small given the daily and seasonal variations in the weather, the number represents an annual global average, and therefore an immense addition of heat to the planet.

The warming would be higher over land, and it would be greatly amplified at the poles, where a considerable amount of ice might melt, raising sea levels. The deep ocean would also absorb a tremendous amount of heat.

Moreover, scientists say that an increase of five or six degrees is a mildly optimistic outlook. They cannot rule out an increase as high as 18 degrees Fahrenheit, which would transform the planet.
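The numbers in these paragraphs reflect a standard simplification: warming scales with the logarithm of the concentration, so each doubling of carbon dioxide adds roughly the same temperature increment. Here is a sketch of that textbook relation; the logarithmic form and the 3-degrees-Celsius-per-doubling sensitivity are conventional mid-range values, not figures from this article:

import math

def equilibrium_warming_f(ppm, baseline_ppm=280.0, sensitivity_c=3.0):
    # Eventual warming in degrees F for a given CO2 level, using the
    # common approximation dT = S * log2(C / C0). S = 3 C per doubling
    # is an assumed mid-range climate sensitivity.
    dt_c = sensitivity_c * math.log2(ppm / baseline_ppm)
    return dt_c * 9.0 / 5.0

print(f"{equilibrium_warming_f(560):.1f} F at a doubling")    # ~5.4 F
print(f"{equilibrium_warming_f(390):.1f} F at 2010's level")  # ~2.6 F

The doubling case reproduces the "five or six degrees Fahrenheit" estimate; the contrarians' position amounts to arguing for a much smaller sensitivity S.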

Climate-change contrarians do not accept these numbers.

The Internet has given rise to a vocal cadre of challengers who question every aspect of the science — even the physics, worked out in the 19th century, that shows that carbon dioxide traps heat. That is a point so elementary and well-established that demonstrations of it are routinely carried out by high school students.

However, the contrarians who have most influenced Congress are a handful of men trained in atmospheric physics. They generally accept the rising carbon dioxide numbers, they recognize that the increase is caused by human activity, and they acknowledge that the earth is warming in response.

But they doubt that it will warm nearly as much as mainstream scientists say, arguing that the increase is likely to be less than two degrees Fahrenheit, a change they characterize as manageable.

Among the most prominent of these contrarians is Richard Lindzen of the Massachusetts Institute of Technology, who contends that as the earth initially warms, cloud patterns will shift in a way that should help to limit the heat buildup. Most climate scientists contend that little evidence supports this view, but Dr. Lindzen is regularly consulted on Capitol Hill.

“I am quite willing to state,” Dr. Lindzen said in a speech this year, “that unprecedented climate catastrophes are not on the horizon, though in several thousand years we may return to an ice age.”

The Fuel of Civilization

While the world’s governments have largely accepted the science of climate change, their efforts to bring emissions under control are lagging.

The simple reason is that modern civilization is built on burning fossil fuels. Cars, trucks, power plants, steel mills, farms, planes, cement factories, home furnaces — virtually all of them spew carbon dioxide or lesser heat-trapping gases into the atmosphere.

Developed countries, especially the United States, are largely responsible for the buildup that has taken place since the Industrial Revolution. They have begun to make some headway on the problem, reducing the energy they use to produce a given amount of economic output, with some countries even managing to lower their total emissions.

But these modest efforts are being swamped by rising energy use in developing countries like China, India and Brazil. In those lands, economic growth is not simply desirable — it is a moral imperative, to lift more than a third of the human race out of poverty. A recent scientific paper referred to China’s surge as “the biggest transformation of human well-being the earth has ever seen.”

China’s citizens, on average, still use less than a third of the energy per person as Americans. But with 1.3 billion people, four times as many as the United States, China is so large and is growing so quickly that it has surpassed the United States to become the world’s largest overall user of energy.

Barring some big breakthrough in clean-energy technology, this rapid growth in developing countries threatens to make the emissions problem unsolvable.

Emissions dropped sharply in Western nations in 2009, during the recession that followed the financial crisis, but that decrease was largely offset by continued growth in the East. And for 2010, global emissions are projected to return to the rapid growth of the past decade, rising more than 3 percent a year.

Many countries have, in principle, embraced the idea of trying to limit global warming to two degrees Celsius, or 3.6 degrees Fahrenheit, feeling that any greater warming would pose unacceptable risks. As best scientists can calculate, that means about one trillion tons of carbon can be burned and the gases released into the atmosphere before emissions need to fall to nearly zero.

“It took 250 years to burn the first half-trillion tons,” Myles R. Allen, a leading British climate scientist, said in a briefing. “On current trends, we’ll burn the next half-trillion in less than 40.”
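Dr. Allen's "less than 40" follows from compounding the growth rate mentioned earlier in the article. A sketch, assuming fossil-fuel emissions of roughly 9 billion tons of carbon a year as the starting rate (that figure is an assumption; the 3 percent annual growth is from the article):

# Years until the second half-trillion tons of carbon is burned,
# if emissions keep growing about 3 percent a year.
rate = 9.0            # assumed current emissions, billion tons C per year
growth = 0.03         # ~3 percent annual growth (from the article)
budget = 500.0        # half a trillion tons, in billions

burned, years = 0.0, 0
while burned < budget:
    burned += rate
    rate *= 1 + growth
    years += 1

print(f"budget exhausted in about {years} years")   # ~34 years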

Unless more serious efforts to convert to a new energy system begin soon, scientists argue, it will be impossible to hit the 3.6-degree target, and the risk will increase that global warming could spiral out of control by century’s end.

“We are quickly running out of time,” said Josep G. Canadell, an Australian scientist who tracks emissions.

In many countries, the United States and China among them, a conversion of the energy system has begun, with wind turbines and solar panels sprouting across the landscape. But they generate only a tiny fraction of all power, with much of the world’s electricity still coming from the combustion of coal, the dirtiest fossil fuel.

With the exception of European countries, few nations have been willing to raise the cost of fossil fuels or set emissions caps as a way to speed the transformation. In the United States, a particular fear has been that a carbon policy will hurt the country’s industries as they compete with companies abroad whose governments have adopted no such policy.

As he watches these difficulties, Ralph Keeling contemplates the unbending math of carbon dioxide emissions first documented by his father more than a half-century ago and wonders about the future effects of that increase.

“When I go see things with my children, I let them know they might not be around when they’re older,” he said. “ ‘Go enjoy these beautiful forests before they disappear. Go enjoy the glaciers in these parks because they won’t be around.’ It’s basically taking note of what we have, and appreciating it, and saying goodbye to it.”

On Dec. 11, another round of international climate negotiations, sponsored by the United Nations, concluded in Cancún. As they have for 18 years running, the gathered nations pledged renewed efforts. But they failed to agree on any binding emission targets.

Late at night, as the delegates were wrapping up in Mexico, the machines atop the volcano in the middle of the Pacific Ocean issued their own silent verdict on the world’s efforts.

At midnight Mauna Loa time, the carbon dioxide level hit 390 — and rising.

German researchers develop ice-free windshield


It's what most European drivers are dreaming of these days: a farewell to ice-scraping. But drivers need some patience: the new technology is not ready for the market yet, the German researchers say.

The DW-WORLD.DE Article
http://newsletter.dw-world.de/re?l=ew78zbI44va89pI1

Friday, December 24, 2010

[iPhone 4 Teardown] Surprised by a design full of screws

I used to follow this kind of information closely.
2010/7/1

[iPhone 4 Teardown, Part 3] Surprised by a design full of screws


Continued from our earlier reports:

[iPhone 4 Teardown, Part 1] First, getting hold of an iPhone 4

[iPhone 4 Teardown, Part 2] The inside is black, as expected

Photo: Pulling the tab on the left side of the battery
Photo: A large number of screws are used
Photo: The iPhone 4 after teardown
Photo: Connectors arrayed on the main board
Next, the teardown team picked targets from among the black components and began removing parts one by one. The first to come out was the largest component, the lithium-polymer rechargeable battery. The batteries in the iPhone 3G and the iPad are held in place with adhesive, and removing them took considerable effort (see our earlier report).

In the iPhone 4, by contrast, the battery comes out easily. Pulling the tab marked "Authorized Service Provider Only" lifts the film underneath the battery, and the battery tips up with it. Inspecting the case after removal showed that only the outer rim of the battery had been fixed with double-sided tape.

The iPhone 4 differs from Apple's previous products in more than how the battery is mounted. The engineers doing the teardown kept muttering:

"This isn't the usual iPhone style."

"It's completely different from Apple's previous products."

One engineer even wondered aloud whether Apple had changed designers.

The biggest difference is the sheer number of screws. Components cannot be removed without first taking out several screws, and as the teardown progressed, the pile of screws on the table kept growing.

Why so many screws? The team's view is that rising component prices are the reason. They surmise that the screws make it possible, when assembly fails during manufacturing or when a unit comes in for repair, to recover as many components as possible rather than scrap the whole device.

Until now, Apple's designs had not assumed that parts would be replaced; batteries, for example, were fixed with double-sided tape and adhesive, so repairing a phone or swapping its battery meant replacing the entire handset. In the iPhone 4, the battery and other components come out easily. Is the aim to cut costs by recovering and reusing components, or to advertise a lighter environmental footprint? Apple's true intentions are unclear, but this much is certain from the iPhone 4: something at Apple has changed.

The other difference is how far the modularization of components has advanced.

The engineers' impression was that "the degree of modularization surpasses that of Japanese manufacturers."

Flexible printed circuits extend from each module to a connector, which in turn plugs into a connector on the main board.

So many connectors line the main board that the engineers remarked, "We've never seen this many connectors arrayed side by side."

Next we will probe the secrets of the antenna. (To be continued; Nikkei Electronics teardown team)

■ Japanese original
【iPhone4分解その3】ネジだらけの設計に驚き

■ Related reports
[iPhone 4 Teardown, Part 1] First, getting hold of an iPhone 4

[iPhone 4 Teardown, Part 2] The inside is black, as expected

"iPhone 4" to go on sale in Japan on June 24, exclusively through Softbank Mobile

Secrets uncovered in the iPhone 3G teardown

[Reporter's blog] The iPhone 4's A4 and the iPad's A4

Scientists produce 'world's smallest Christmas card'

Graphic: Glasgow University provided this graphic to demonstrate how small the card is.

Scientists have produced what they believe is the world's smallest Christmas card.

It was created by nanotechnologists at the University of Glasgow and is so small it could fit onto the surface of a postage stamp 8,276 times.

The image, which measures 200 by 290 micro-metres, features a Christmas tree and is etched on a tiny piece of glass.

The team behind the project said the technology could eventually be used in products such as TVs and cameras.

The university's school of engineering drew up the design to highlight its "world-leading" nanotechnology expertise.

Prof David Cumming said: "Our nanotechnology is among the best in the world but sometimes explaining to the public what the technology is capable of can be a bit tricky.

"We decided that producing this Christmas card was a simple way to show just how accurate our technology is.

"The process to manufacture the card only took 30 minutes. It was very straightforward to produce as the process is highly repeatable - the design of the card took far longer than the production.

Human hair

Prof Cumming added: "The card is 200 micro-metres wide by 290 micro-metres tall.

"To put that into some sort of perspective, a micro-metre is a millionth of a metre; the width of a human hair is about 100 micro-metres.

"You could fit over half a million of them on to a standard A5 Christmas card - but signing them would prove to be a bit of a challenge."

The colours were produced by a process known as plasmon resonance in a patterned aluminium film made in the university's James Watt Nanofabrication Centre.

Although the Christmas card example is a simple demonstration, the university said the underlying technology had important real-world applications.

The electronics industry is taking advantage of micro and nano-fabrication technology by using it in bio-technology sensing, optical filtering and light control components.

The applications are critical in the future development of the digital economy and could eventually find their way into cameras, television and computer screens to reduce the manufacturing cost.


World's smallest Christmas card is two hairs wide

[Compiled by Chang Pei-yuan from wire reports] British media report that nanotechnology scientists at the University of Glasgow have produced what is believed to be the world's smallest Christmas card, one too small to see with the naked eye: it would take 8,276 of the miniature cards to fill the surface of an ordinary postage stamp.

Etched on a small piece of glass, the card bears the image of a Christmas tree. The Glasgow scientists designed it to showcase the university's world-leading nanotechnology. Professor Cumming said their nanotechnology is among the best in the world, but explaining it to the public is not always easy: "We decided to produce this Christmas card to show just how precise our technology is."

Cumming said making the miniature card took only 30 minutes; designing the image took rather longer. The card is 200 micrometres wide and 290 micrometres tall, and one micrometre is a millionth of a metre. A human hair is about 100 micrometres wide, and it would take more than half a million of the miniature cards to fill a standard A5 Christmas card, though signing one would be a problem.

Making a miniature Christmas card is only a small demonstration, but the technology involved has real-world applications: the electronics industry already uses micro- and nano-fabrication in biotechnology sensing, optical filtering and light-control components.

Tuesday, December 14, 2010

Taiwan scientists claim microchip 'breakthrough'

Taiwan flora show features high-tech displays
The Associated Press
TAIPEI, Taiwan (AP) — Paper-thin speakers blare pop music. 3-D films appear on elongated screens with no need for special viewing glasses. ...

----

Taiwan scientists claim microchip 'breakthrough'

TAIPEI — Taiwanese scientists on Tuesday unveiled an advanced microchip technology which they claimed marks a breakthrough in piling ever more memory into ever smaller spaces.

The scientists said they had succeeded in producing a circuit measuring just nine nanometres across -- one nanometre is equal to one billionth of a metre.

"Researchers used to believe that 20 nanometres was the limit for microchip technologies," said Ho Chia-hua, who heads the team behind the project at the state-run National Nano Device Laboratories.

A chip using the new memory technology has about 20 times the storage capacity of memory units now on the market and consumes just one-200th of the electricity, the scientists said.

The benefits of greater memory and reduced electricity consumption are highly sought in the manufacture of electronic gadgets like smart phones and tablet computers.

Using such technology, a chip the size of one square centimetre will be capable of storing one million pictures or 100 hours of 3D movies, said Yang Fu-liang, the director general of the Laboratories.
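That claim can be sanity-checked with a crude density estimate. A rough sketch, assuming one bit per 9-by-9-nanometre cell and about 150 kilobytes per compressed photo; both are assumptions, and since real memory cells occupy several times the minimum feature size, this is an upper bound:

# Upper-bound capacity of a 1 square centimetre chip of 9 nm cells.
cell_m = 9e-9                          # feature size from the article
chip_side_m = 1e-2                     # 1 cm on a side
bits = (chip_side_m / cell_m) ** 2     # assumed: one bit per cell
gigabytes = bits / 8 / 1e9             # ~154 GB

photo_bytes = 150e3                    # assumed photo size
photos = gigabytes * 1e9 / photo_bytes
print(f"{gigabytes:.0f} GB, about {photos / 1e6:.1f} million photos")

Under those assumptions the chip holds on the order of 150 gigabytes, which is indeed roughly a million photos.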

However, Nobunaga Chai, an analyst at the Taipei-based electronics market research firm Digitimes, said it would be some time before anyone could start making money on the technology.

"I'm afraid it will take several years before the advanced technology can be turned into commercial use," he told AFP.

Taiwan is among the top four microchip producers in the world.

Thursday, December 9, 2010

A Golden Age in Science, Full of Light and Shadow

Exhibition Review


Chang W. Lee/The New York Times

Visitors, from age 4 up, at the exhibition “1001 Inventions.”


“Take a look,” Ben Kingsley says, dropping an ancient tome before three British students as if he were teaching the Dark Arts at Hogwarts. “Take a look,” he tells them, “if you dare.”


The book magically opens, releasing a cyclone of glittering ghosts. And Mr. Kingsley — who here portrays a librarian trying to get bored students interested in what their teacher calls the “Dark Ages” — is transformed into the turbaned al-Jazari: 12th-century inventor, mechanical engineer, visionary. “Welcome to the Dark Ages!” he declares, “or as it should be known, the Golden Ages!”

After he takes the students “from darkness into light” in this introductory film, we are off and running through “1001 Inventions,” at the New York Hall of Science in Queens. The exhibition’s name invokes the Eastern exoticism of Scheherazade, but the show is in earnest about its claims.

There aren’t 1,001 inventions on display, but those that are, along with the ideas described, are meant to show that the Western Dark Ages really were a Golden Age of Islam: a thousand years, in the show’s reckoning, that lasted into the 17th century. During that era, the exhibition asserts, Muslim scientists and inventors, living in empires reaching from Spain to China, anticipated the innovations of the modern world.

There are serious problems with this exhibition, but this has had no effect on its international acclaim. Conceived by a mechanical engineer, Salim T. S. al-Hassani, it began on a smaller scale touring British cities. It expanded into its current form at the London Science Museum this year, attracting 400,000 visitors, according to the show’s Web site. And its lavish companion book, “Muslim Heritage in Our World,” has won plaudits.

Kiosks are arranged here in an 8,000-square-foot space, their explanations, interactive displays, and videos examining seven “zones”: Home, School, Market, Hospital, Town, World, Universe. The show is also family friendly. A 20-foot-high reproduction of al-Jazari’s mechanical water clock welcomes visitors, its base an elephant and its crown a phoenix; unfortunately it is not really a replica — it operates without the water mechanism — but its playful monumentalism intrigues. And while some interactive exhibits are stilted, an astronomy display lets you reach toward a screen of the night sky like a deity, your gestures gliding a constellation into its proper place.

Throughout, the exhibition pays tribute to an important scientific tradition not commonly familiar, stocked with extraordinary technological creativity and scholarly enterprise. From 10th-century Spain we read of al-Zahrawi, author of an encyclopedic treatise on surgery. From 10th-century Baghdad we find al-Haytham, whose explorations of optics helped lay the foundations for Newton’s discoveries. We learn of advances in medical care, mathematics, astronomy and architecture.

As it turns out, though, the account requires extensive qualification. Had we learned more about scientific principles, had we been given sober assessments of, say, how 10th-century science developed, had a scholarly perspective been more evident — had we, in other words, been ushered into this world in a way once expected from science museums — the show could have been far more powerful.

Instead, it is as manipulative as it is illuminating. “1001 Inventions,” we are told in the literature, “is a nonreligious and non-political project.” But it actually is a little bit religious and considerably political.

It is less a typical science exhibition than a typical “identity” exhibition. It was created by the Foundation for Science, Technology and Civilization in London, whose goal is “to popularize, spread and promote an accurate account of Muslim Heritage and its contribution.” The show also tries to “instill confidence” and provide positive “role models” for young Muslims, as Mr. Hassani puts it in the book. And it is part of a “global educational initiative” that includes extensive classroom materials.

The promotional goal is evident in every display. The repeated suggestion is that Muslim scientists made discoveries later attributed to Westerners and that many Western institutions were shaped by Muslim contributions.

The exhibition, though, wildly overdoes it. First, it creates a straw man, reviving the notion, now defunct, of the Dark Ages. Then it overstates the neglect of Muslim science, which has, to the contrary, long been cited in Western scholarship. It also expands the Golden Age of Islam to a millennium, though the bright years were once associated with just portions of the Abbasid Caliphate, which itself lasted for about 500 years, from the eighth century to 1258. The show’s inflated ambitions make it difficult to separate error from exaggeration, and implication from fact.

Consider one label: “Setting the Story Straight.” We read: “For many centuries, English medic William Harvey took the prize as the first person to work out how our blood circulates.” But “what nobody knew” was that the “heart and lungs’ role in blood flow” was figured out by Ibn al-Nafis, the 13th-century physician. And yes, al-Nafis’s impressive work on pulmonary circulation apparently fell into oblivion until 1924. But Harvey’s 17th-century work was more complete; it was a theory of the entire circulatory system. So while neglect is clear, differences should be as well.

But the exhibition even seems to expand its claim. Historians, the label continues, have recently found evidence that Ibn al-Nafis’s Arabic text “may have been translated into Latin, paving the way to suppose that it might have indirectly influenced” Harvey’s work. The “may have,” the “suppose,” the “might have” and the “indirectly” reflect an overwhelming impulse to affirm what cannot be proved.

Sometimes Muslim precedence is suggested with even vaguer assertions. We read that Ibn Sina, in the 11th century, speculated about geological formations, “ideas that were developed, perhaps independently, by geologist James Hutton in the 18th century.” Why “perhaps independently”? Is there any evidence of influence? Are the analyses comparable? How? Nothing is clear other than a vague sense of wrongful neglect.

Some assertions go well beyond the evidence. Hovering above the show is a glider grasped by a ninth-century inventor from Cordoba, Abbas ibn Firnas, “the first person to have actually tried” to fly. But that notion is based on a source that relied on ibn Firnas’s mention in a ninth-century poem. It also ignores the historian Joseph Needham’s description of Chinese attempts as early as the first century. The model of the flying machine is pure speculation.

And some claims are simply incorrect: catgut was used in surgical sutures by Galen in the second century, long before al-Zahrawi (named here as its pioneer).

The exhibition also dutifully praises the multicultural aspect of this Golden Age while actually undercutting it. Major cultures of the first millennium (China, India, Byzantium) are mentioned only to affirm the weightier significance of Muslim contributions. And though we read that people “of many faiths worked together” in the Golden Age, we don’t learn much about them.

Religious affiliation actually seems far more important here than is acknowledged, keeping some figures out and ushering others in. Christian Arab contributions go unheralded, but the 15th-century Chinese explorer Zheng He, a Muslim, is celebrated though he has no deep connection to Golden Age cultures.

And finally we never learn much about the role of Islam itself. Universities, we read, were affiliated with mosques. Did that affect scientific inquiry or the status of non-Muslim scientists? Did the religious regime have any impact on the ultimate failure of the transmission and expansion of scientific knowledge? And given the high cost of any golden age, isn’t it necessary to give some account of this civilization’s extensive slave trade?

Instead of expanding the perspective, the exhibition reduces it to caricature, showing Muslim culture rising out of a shadowy past to attain glories later misappropriated by Western epigones. Left unexplored too is how this tradition ended, leading to a long eclipse of science in Muslim lands. There is only a recurring hint of injustices done.

The paradox is that this narrative is not only questionable but also unnecessary. An exhibition about scientific achievements during the Abbasid Caliphate could be remarkable if approached with curatorial perspective. Why then, the indulgence here?

Perhaps because one tendency in the West, particularly after 9/11, has been to answer Muslim accusations of injustice (and even real attacks) with an exaggerated declaration of regard. It is guiltily offered as if in embarrassed compensation, inspired by a desire not to appear to tar Islam with the fervent claims made by its most violent adherents.

Science museums have shared that impulse. An Imax film at the Boston Museum of Science is almost a commercial travelogue about science’s future in Saudi Arabia; and the Liberty Science Center in New Jersey has presented a traveling exhibition about Muslim inventions that, like this one, mixed fascinating information with promotional overstatement.

What is peculiar too is that the current Hall of Science show presumes a long neglect of Muslim innovations, but try finding anything comparable about Western discoveries for American students. Where is a systematic historical survey of the West’s great ideas and inventions in contemporary science museums, many of which now seem to have very different preoccupations?

In the meantime, in the interest of mutual understanding, some such show about Western science might perhaps be mounted in Riyadh or Tehran, just as this one was in London. Wouldn’t that be a tale worthy of Scheherazade? It might begin: “Take a look, if you dare.”

“1001 Inventions” is on view through April 24 at the New York Hall of Science, Flushing Meadows-Corona Park, Queens; nysci.org.

The Future, Touchable and in Color

State of the Art

E-book readers like the Amazon Kindle may be all the rage this holiday season. But five years from now, they’ll seem as laughably primitive as the Commodore 64.

“Oh, man, remember those Cro-Magnon e-book readers?” we’ll say. “They used E Ink screens — black text on gray. No color. No touch screens. And every time you turned a page, you got this weird black-white-black flash. Can you believe anyone bought those?”

Well, it’s time for some progress. Barnes & Noble’s new Nook Color ($250) is the first big-name e-book reader with a color touch screen. It has confusing aspects, but it’s light-years better than last year’s slow, kludgy black-and-white Nook. (The company says the new Nook was designed by a new team, based in Silicon Valley and composed largely of former Palm employees.)

The hardware is handsome. It’s an 8-by-5-inch slab, half an inch thick, with an aluminum border and rubberized back. You can poke your finger through the triangular cut-out in the lower left corner. It’s just a design quirk, although maybe you could attach your key ring to it.

This Nook weighs a pound, somewhere between the Kindle (8.5 ounces) and the iPad (1.5 pounds). The color screen means you’ll have to recharge the battery every few days, rather than every few weeks. The animations are a little jerky, and the screen often doesn’t “hear” your tap the first time. But otherwise, the Color Nook is fast enough.

As for the touch screen — well, you know what? All e-readers should have touch screens. Once you tap to open a book, swipe the page to turn it and drag your finger on the Brightness slider, using a joystick to move the cursor on an E Ink screen seems indirect and antique.

The color screen is bright and beautiful. Magazines, for example, look spectacular. You can subscribe to any of 70 magazines (the first two weeks are free) or buy individual issues. You get the whole layout, including ads; it’s great.

Of course, you can’t read a full-size magazine page when it’s shrunk onto a 7-inch screen. So you navigate as if on an iPhone: you spread two fingers to zoom in, and drag a finger to pan around.

You can also summon a scrolling row of colorful page miniatures at the bottom of the screen, for ease of navigation. Some magazines even have an Article View: a scrolling, vertical, uncluttered column of black-on-white text that’s easy to read. The original magazine layout lies behind it for context.

Children’s books also benefit enormously from color, and they get special treatment on the Color Nook. You can tap the text on any page to enlarge it. Some titles — 300 by year’s end, the company says — offer a Read to Me button, so that your young reader can follow along with a recorded voice. My 6-year-old loved the effect and begged for more.

As on other e-readers, you can subscribe to newspapers; if you’re in a Wi-Fi hot spot, the paper arrives on your reader automatically in the middle of the night, ready for your commute. The photos look great in color. But the rest of the newspaper is bizarrely spartan and unimaginative, especially compared to the elaborate magazine mode. There’s no sense of layout; the whole thing looks like a beginner’s blog.

Color doesn’t add much to regular books. (Barnes & Noble says that its attractively redesigned online store offers two million books. About 1.5 million of those, however, are free, very old, often obscure books scanned by Google.)

But all books benefit from the Nook’s self-illuminating, laptop-style screen. The bedtime routine of many a Kindle owner — wedging a flashlight behind one ear — is a thing of the past.

In sunshine, you can still read the Color Nook, though not as easily as an E Ink screen. (Glare is sometimes a problem, too.) The question is, where do you do more reading: in sunlight, or at night? Only you can answer that question.

That’s not the only decision I can’t make for you. Another one is, Where do you stand on the features-versus-complexity issue?

The Nook Color is absolutely bristling with features. Notes, highlighting, bookmarks, instant dictionary definitions, quick Wikipedia or Google lookups of a chosen word. You can select passages of text and post them to your Twitter or Facebook accounts. (The Nook Color gets online only in Wi-Fi hot spots.)

There’s a basic, built-in Web browser. A music player. An image and video viewer. There’s a MicroSD memory-card slot, so you can expand the Nook’s storage from 8 gigabytes (6,000 books) to 40 gigabytes (35,000 books, just enough to hold the complete James Patterson collection).

The “Lend Me” feature from the first Nook is still here, but it’s still laughably restrictive. You can lend a book only once, to one person, for two weeks, during which time you can’t read it. (You can’t read it while your loan offer is pending, either — another week.) You can lend only books whose publishers have agreed to it, and precious few have. Of this week’s 15 New York Times fiction best sellers, only two are lendable.

And as with all commercial e-books, you still can’t sell or even give away a book when you’re finished with it.

This Nook is customizable to a dizzying degree. You have three “home screens,” where you can drag icons for books and magazines, and also a Library bookshelf, where you can install, name and fill new shelves. You can change the font (of books), and also the type size, the margin width, the line spacing and even the background color. Some of the color schemes are surprisingly soothing.

There are even apps, for heaven’s sake. Yes, the Color Nook runs on Google’s free Android operating system; but no, it doesn’t run apps designed for Android phones. It comes with some starter apps, like Sudoku, a crossword and a Pandora radio app; the company says programmers will soon be able to write additional Nook programs.

The price you pay is complexity. The Color Nook offers far too many pop-up control racks. There’s the Quick Nav bar, the Status Bar, the Media Bar, the Library, the Daily Shelf and the Recent Items menu. It will take you quite a while to master what’s where.

The bigger problem, actually, is the wild inconsistency of features. It’s as if Barnes & Noble assigned the magazine-reading app to one team, books to another and newspapers to a third.

For example, the screen image rotates when you turn the Nook 90 degrees — but only in magazines and Web pages, not newspapers or books. Children’s books appear only the wide way; adult books, only the tall way.

You can hold your finger on a word to add a note or look up a definition — but only in books and newspapers, not magazines. A single tap brings up the row of page thumbnails across the bottom — but only in magazines, not books or newspapers. You can spread two fingers apart to zoom into a magazine page — but not a Web page, book or newspaper. You swipe your finger horizontally to turn pages in books — but vertically to turn pages in PDF documents.

In short, the Nook Color doesn’t have anything close to the refinement and consistency of, say, an iPad or even a Kindle. At the same time, the Nook Color feels more modern and powerful than the Kindle. It also feels more like a computer than the Kindle, which is both a blessing and a curse.

Yes, five years from now, we’ll laugh at this reader, too — but not derisively. As we unwrap our all-color, all-touch screen e-book readers under the 2015 tree, we’ll remember this machine as the one that showed the way.

E-mail: pogue@nytimes.com

T. A. Watson Dead; Made First Phone

December 15, 1934
OBITUARY

By THE ASSOCIATED PRESS

ST. PETERSBURG, FLA., Dec. 14 -- Thomas A. Watson, manufacturer of the first telephone instrument and first to hear a human voice over the device, that of its inventor, Alexander Graham Bell, died suddenly of heart disease here last night at his Winter home on Pass-Grille Key. He was 80 years old.

Mr. Watson came here three weeks ago from his home on Beacon Street, Boston. He had been a Winter visitor here since 1918.

In an interview here several years ago Mr. Watson described how an accident, involving spilled acid, resulted in the first actual reception of a human voice over a wire on March 10, 1876.

Professor Bell and Mr. Watson had arranged wires leading from a room on the top floor of a Boston boarding house to a room on the floor below. The apparatus was arranged for transmission of the voice in one direction only.

A Historic Shout.

Watson was waiting tensely in the room below, with the reception apparatus held against his ear. Suddenly he heard Dr. Bell shout excitedly:

"Mr. Watson! Come here; I want--!"

Struck with the realization that he had actually heard Professor Bell over the wire, Watson dashed jubilantly upstairs.

"I heard you! I heard you!" he gasped.

Then he noticed Professor Bell brushing frantically at his arms and clothing. He had accidentally spilled a bottle of acid upon himself. His summons over the wire, made with little hope it would be heard, was really one for assistance.

Mr. Watson said Professor Bell forgot about the acid when he learned his voice had been heard over the wire by his associate.

Partner of Bell.

Bell and Watson became acquainted during the apprenticeship of Watson in a machine shop at Boston, where experimental machinery was being made for Professor Bell. The latter at that time was a teacher of deaf mutes in Boston. It was while experimenting with the vibration of the drum of a deaf man's ear that he first became convinced of the possibility of conveying the human voice by wire.

After two years' employment in the machine shop Mr. Watson formed a partnership with Professor Bell. They rigged up a secret laboratory in a cellar at Salem, Mass., and Watson agreed to devote all his time to perfecting the Bell inventions in consideration of a share in the Bell patents.

On Oct. 9, 1876, they had so perfected the telephone that they held a conversation between Boston and Cambridge over a two-mile wire.

Nearly forty years after their first telephone conversation, Dr. Bell and Mr. Watson had the honor of being the first persons to talk by telephone across the American continent. Meanwhile, they had seen their invention grow steadily until more than 13,000,000 telephones were in use throughout the world.

In 1920 Mr. Watson visioned telephone conversations across the Atlantic Ocean as "only the beginning of modern development in this method of communication." Six years later he predicted that in the future "man will speak to man by mental telepathy."