Monday, May 30, 2011

The Optimism Bias


By Tali Sharot Saturday, May 28, 2011

We like to think of ourselves as rational creatures. We watch our backs, weigh the odds, pack an umbrella. But both neuroscience and social science suggest that we are more optimistic than realistic. On average, we expect things to turn out better than they wind up being. People hugely underestimate their chances of getting divorced, losing their job or being diagnosed with cancer; expect their children to be extraordinarily gifted; envision themselves achieving more than their peers; and overestimate their likely life span (sometimes by 20 years or more).

The belief that the future will be much better than the past and present is known as the optimism bias. It abides in every race, region and socioeconomic bracket. Schoolchildren playing when-I-grow-up are rampant optimists, but so are grownups: a 2005 study found that adults over 60 are just as likely to see the glass half full as young adults.

You might expect optimism to erode under the tide of news about violent conflicts, high unemployment, tornadoes and floods and all the threats and failures that shape human life. Collectively we can grow pessimistic — about the direction of our country or the ability of our leaders to improve education and reduce crime. But private optimism, about our personal future, remains incredibly resilient. A survey conducted in 2007 found that while 70% thought families in general were less successful than in their parents' day, 76% of respondents were optimistic about the future of their own family. (See "The Case for Optimism" in TIME's special: 10 Ideas That Will Change the World.)

Overly positive assumptions can lead to disastrous miscalculations — make us less likely to get health checkups, apply sunscreen or open a savings account, and more likely to bet the farm on a bad investment. But the bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.

To make progress, we need to be able to imagine alternative realities — better ones — and we need to believe that we can achieve them. Such faith helps motivate us to pursue our goals. Optimists in general work longer hours and tend to earn more. Economists at Duke University found that optimists even save more. And although they are not less likely to divorce, they are more likely to remarry — an act that is, as Samuel Johnson wrote, the triumph of hope over experience. (See if the global "happiness" index will ever beat out the GDP.)

Even if that better future is often an illusion, optimism has clear benefits in the present. Hope keeps our minds at ease, lowers stress and improves physical health. Researchers studying heart-disease patients found that optimists were more likely than nonoptimistic patients to take vitamins, eat low-fat diets and exercise, thereby reducing their overall coronary risk. A study of cancer patients revealed that pessimistic patients under the age of 60 were more likely to die within eight months than nonpessimistic patients of the same initial health status and age.

In fact, a growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. The science of optimism, once scorned as an intellectually suspect province of pep rallies and smiley faces, is opening a new window on the workings of human consciousness. What it shows could fuel a revolution in psychology, as the field comes to grips with accumulating evidence that our brains aren't just stamped by the past. They are constantly being shaped by the future.



Hardwired for Hope?
I would have liked to tell you that my work on optimism grew out of a keen interest in the positive side of human nature. The reality is that I stumbled onto the brain's innate optimism by accident. After living through Sept. 11, 2001, in New York City, I had set out to investigate people's memories of the terrorist attacks. I was intrigued by the fact that people felt their memories were as accurate as a videotape, while often they were filled with errors. A survey conducted around the country showed that 11 months after the attacks, individuals' recollections of their experience that day were consistent with their initial accounts (given in September 2001) only 63% of the time. They were also poor at remembering details of the event, such as the names of the airline carriers. Where did these mistakes in memory come from?

Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future — to enable us to prepare for what has yet to come. The system is not designed to perfectly replay past events, the researchers claimed. It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted. (See why happiness isn't always good.)

To test this, I decided to record the brain activity of volunteers while they imagined future events — not events on the scale of 9/11, but events in their everyday lives — and compare those results with the pattern I observed when the same individuals recalled past events. But something unexpected occurred. Once people started imagining the future, even the most banal life events seemed to take a dramatic turn for the better. Mundane scenes brightened with upbeat details as if polished by a Hollywood script doctor. You might think that imagining a future haircut would be pretty dull. Not at all. Here is what one of my participants pictured: "I was getting my hair cut to donate to Locks of Love [a charity that fashions wigs for young cancer patients]. It had taken me years to grow it out, and my friends were all there to help celebrate. We went to my favorite hair place in Brooklyn and then went to lunch at our favorite restaurant."

I asked another participant to imagine a plane ride. "I imagined the takeoff — my favorite! — and then the eight-hour-long nap in between and then finally landing in Krakow and clapping for the pilot for providing the safe voyage," she responded. No tarmac delays, no screaming babies. The world, only a year or two into the future, was a wonderful place to live in.

If all our participants insisted on thinking positively when it came to what lay in store for them personally, what does that tell us about how our brains are wired? Is the human tendency for optimism a consequence of the architecture of our brains? (See the new science of happiness.)

The Human Time Machine
To think positively about our prospects, we must first be able to imagine ourselves in the future. Optimism starts with what may be the most extraordinary of human talents: mental time travel, the ability to move back and forth through time and space in one's mind. Although most of us take this ability for granted, our capacity to envision a different time and place is in fact critical to our survival.

It is easy to see why cognitive time travel was naturally selected for over the course of evolution. It allows us to plan ahead, to save food and resources for times of scarcity and to endure hard work in anticipation of a future reward. It also lets us forecast how our current behavior may influence future generations. If we were not able to picture the world in a hundred years or more, would we be concerned with global warming? Would we attempt to live healthily? Would we have children?

While mental time travel has clear survival advantages, conscious foresight came to humans at an enormous price — the understanding that somewhere in the future, death awaits. Ajit Varki, a biologist at the University of California, San Diego, argues that the awareness of mortality on its own would have led evolution to a dead end. The despair would have interfered with our daily function, bringing the activities needed for survival to a stop. The only way conscious mental time travel could have arisen over the course of evolution is if it emerged together with irrational optimism. Knowledge of death had to emerge side by side with the persistent ability to picture a bright future.

The capacity to envision the future relies partly on the hippocampus, a brain structure that is crucial to memory. Patients with damage to their hippocampus are unable to recollect the past, but they are also unable to construct detailed images of future scenarios. They appear to be stuck in time. The rest of us constantly move back and forth in time; we might think of a conversation we had with our spouse yesterday and then immediately of our dinner plans for later tonight.

But the brain doesn't travel in time in a random fashion. It tends to engage in specific types of thoughts. We consider how well our kids will do in life, how we will obtain that sought-after job, afford that house on the hill and find perfect love. We imagine our team winning the crucial game, look forward to an enjoyable night on the town or picture a winning streak at the blackjack table. We also worry about losing loved ones, failing at our job or dying in a terrible plane crash — but research shows that most of us spend less time mulling over negative outcomes than we do over positive ones. When we do contemplate defeat and heartache, we tend to focus on how these can be avoided. (See 20 ways to get and stay happy.)

Findings from a study I conducted a few years ago with prominent neuroscientist Elizabeth Phelps suggest that directing our thoughts of the future toward the positive is a result of our frontal cortex's communicating with subcortical regions deep in our brain. The frontal cortex, a large area behind the forehead, is the most recently evolved part of the brain. It is larger in humans than in other primates and is critical for many complex human functions such as language and goal setting.

Using a functional magnetic resonance imaging (fMRI) scanner, we recorded brain activity in volunteers as they imagined specific events that might occur to them in the future. Some of the events that I asked them to imagine were desirable (a great date or winning a large sum of money), and some were undesirable (losing a wallet, ending a romantic relationship). The volunteers reported that their images of sought-after events were richer and more vivid than those of unwanted events.

This matched the enhanced activity we observed in two critical regions of the brain: the amygdala, a small structure deep in the brain that is central to the processing of emotion, and the rostral anterior cingulate cortex (rACC), an area of the frontal cortex that modulates emotion and motivation. The rACC acts like a traffic conductor, enhancing the flow of positive emotions and associations. The more optimistic a person was, the higher the activity in these regions was while imagining positive future events (relative to negative ones) and the stronger the connectivity between the two structures. (See "Do We need $75,000 a Year to Be Happy?")
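
The analysis logic behind that result can be made concrete with a small sketch. This is not the study's actual pipeline; the arrays, trial labels and trait-optimism scores below are simulated stand-ins, and the region-of-interest signal is reduced to one number per trial:

# Minimal sketch (hypothetical data): per-subject "positive minus negative
# imagining" contrast in an ROI such as the rACC, correlated with trait optimism.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_trials = 20, 40

activity = rng.normal(size=(n_subjects, n_trials))            # ROI signal per trial
is_positive = rng.integers(0, 2, size=(n_subjects, n_trials)).astype(bool)
optimism = rng.normal(size=n_subjects)                         # questionnaire score

# Per subject: mean activity while imagining positive events minus
# mean activity while imagining negative events.
contrast = np.array([
    activity[s, is_positive[s]].mean() - activity[s, ~is_positive[s]].mean()
    for s in range(n_subjects)
])

# Across subjects: does a larger positive-vs-negative contrast go with
# higher trait optimism? (The article reports that it does.)
r, p = pearsonr(optimism, contrast)
print(f"optimism vs. contrast: r={r:.2f}, p={p:.3f}")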

The findings were particularly fascinating because these precise regions — the amygdala and the rACC — show abnormal activity in depressed individuals. While healthy people expect the future to be slightly better than it ends up being, people with severe depression tend to be pessimistically biased: they expect things to be worse than they end up being. People with mild depression are relatively accurate when predicting future events. They see the world as it is. In other words, in the absence of a neural mechanism that generates unrealistic optimism, it is possible all humans would be mildly depressed.

Can Optimism Change Reality?
The problem with pessimistic expectations, such as those of the clinically depressed, is that they have the power to alter the future; negative expectations shape outcomes in a negative way. How do expectations change reality?


To answer this question, my colleague, cognitive neuroscientist Sara Bengtsson, devised an experiment in which she manipulated students' positive and negative expectations while their brains were scanned, then tested their performance on cognitive tasks. To induce expectations of success, she primed college students with words such as smart, intelligent and clever just before asking them to perform a test. To induce expectations of failure, she primed them with words like stupid and ignorant. The students performed better after being primed with an affirmative message.

Examining the brain-imaging data, Bengtsson found that the students' brains responded differently to the mistakes they made depending on whether they were primed with the word clever or the word stupid. When the mistake followed positive words, she observed enhanced activity in the anterior medial part of the prefrontal cortex (a region that is involved in self-reflection and recollection). However, when the participants were primed with the word stupid, there was no heightened activity after a wrong answer. It appears that after being primed with the word stupid, the brain expected to do poorly and did not show signs of surprise or conflict when it made an error. (See how playing the part of an optimist can help your health.)

A brain that doesn't expect good results lacks a signal telling it, "Take notice — wrong answer!" These brains will fail to learn from their mistakes and are less likely to improve over time. Expectations become self-fulfilling by altering our performance and actions, which ultimately affects what happens in the future. Often, however, expectations simply transform the way we perceive the world without altering reality itself. Let me give you an example. As I write these lines, a friend calls. He is at Heathrow Airport waiting to get on a plane to Austria for a skiing holiday. His plane has been delayed for three hours already, because of snowstorms at his destination. "I guess this is both a good and bad thing," he says. Waiting at the airport is not pleasant, but he quickly concludes that snow today means better skiing conditions tomorrow. His brain works to match the unexpected misfortune of being stuck at the airport to its eager anticipation of a fun getaway.

A canceled flight is hardly tragic, but even when the incidents that befall us are the type of horrific events we never expected to encounter, we automatically seek evidence confirming that our misfortune is a blessing in disguise. No, we did not anticipate losing our job, being ill or getting a divorce, but when these incidents occur, we search for the upside. These experiences mature us, we think. They may lead to more fulfilling jobs and stable relationships in the future. Interpreting a misfortune in this way allows us to conclude that our sunny expectations were correct after all — things did work out for the best.

Silver Linings
How do we find the silver lining in storm clouds? To answer that, my colleagues — renowned neuroscientist Ray Dolan and neurologist Tamara Shiner — and I instructed volunteers in the fMRI scanner to visualize a range of medical conditions, from broken bones to Alzheimer's, and rate how bad they imagined these conditions to be. Then we asked them: If you had to endure one of the following, which would you rather have — a broken leg or a broken arm? Heartburn or asthma? Finally, they rated all the conditions again. Minutes after choosing one particular illness out of many, the volunteers suddenly found that the chosen illness was less intimidating. A broken leg, for example, may have been thought of as "terrible" before choosing it over some other malady. However, after choosing it, the subject would find a silver lining: "With a broken leg, I will be able to lie in bed watching TV, guilt-free." (See how self-help can stop negative thoughts.)

In our study, we also found that people perceived adverse events more positively if they had experienced them in the past. Recording brain activity while these reappraisals took place revealed that highlighting the positive within the negative involves, once again, a tête-à-tête between the frontal cortex and subcortical regions processing emotional value. While contemplating a mishap, like a broken leg, activity in the rACC modulated signals in a region called the striatum that conveyed the good and bad of the event in question — biasing activity in a positive direction.

It seems that our brain possesses the philosopher's stone that enables us to turn lead into gold and helps us bounce back to normal levels of well-being. It is wired to place high value on the events we encounter and put faith in its own decisions. This is true not only when forced to choose between two adverse options (such as selecting between two courses of medical treatment) but also when we are selecting between desirable alternatives. Imagine you need to pick between two equally attractive job offers. Making a decision may be a tiring, difficult ordeal, but once you make up your mind, something miraculous happens. Suddenly — if you are like most people — you view the chosen offer as better than you did before and conclude that the other option was not that great after all. According to social psychologist Leon Festinger, we re-evaluate the options postchoice to reduce the tension that arises from making a difficult decision between equally desirable options.


In a brain-imaging study I conducted with Ray Dolan and Benedetto De Martino in 2009, we asked subjects to imagine going on vacation to 80 different destinations and rate how happy they thought they would be in each place. We then asked them to select one destination from two choices that they had rated exactly the same. Would you choose Paris over Brazil? Finally, we asked them to imagine and rate all the destinations again. Seconds after picking between two destinations, people rated their selected destination higher than before and rated the discarded choice lower than before.

The brain-imaging data revealed that these changes were happening in the caudate nucleus, a cluster of nerve cells that is part of the striatum. The caudate has been shown to process rewards and signal their expectation. If we believe we are about to be given a paycheck or eat a scrumptious chocolate cake, the caudate acts as an announcer broadcasting to other parts of the brain, "Be ready for something good." After we receive the reward, the value is quickly updated. If there is a bonus in the paycheck, this higher value will be reflected in striatal activity. If the cake is disappointing, the decreased value will be tracked so that next time our expectations will be lower.

In our experiment, after a decision was made between two destinations, the caudate nucleus rapidly updated its signal. Before choosing, it might signal "thinking of something great" while imagining both Greece and Thailand. But after choosing Greece, it now broadcast "thinking of something remarkable!" for Greece and merely "thinking of something good" for Thailand. (See pictures of couples in love.)
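
The updating behaviour attributed to the caudate can be caricatured with a simple delta rule: the expected value of an outcome moves toward what was actually received, by a fraction of the prediction error. The learning rate and the numbers below are invented for illustration; real striatal coding is far richer:

# Toy delta-rule update (illustrative values only).
def update_expectation(expected, received, learning_rate=0.3):
    prediction_error = received - expected
    return expected + learning_rate * prediction_error

cake = 8.0                                          # high hopes for the chocolate cake
cake = update_expectation(cake, received=4.0)       # the cake disappoints
print(round(cake, 2))                               # 6.8 -- expectations lower next time

paycheck = 5.0
paycheck = update_expectation(paycheck, received=7.0)   # surprise bonus
print(round(paycheck, 2))                           # 5.6 -- expectation revised upward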

True, sometimes we regret our decisions; our choices can turn out to be disappointing. But on balance, when you make a decision — even if it is a hypothetical choice — you will value it more and expect it to bring you pleasure.

This affirmation of our decisions helps us derive heightened pleasure from choices that might actually be neutral. Without this, our lives might well be filled with second-guessing. Have we done the right thing? Should we change our mind? We would find ourselves stuck, overcome by indecision and unable to move forward.

The Puzzle of Optimism
While the past few years have seen important advances in the neuroscience of optimism, one puzzle has endured. How is it that people maintain this rosy bias even when information challenging their upbeat forecasts is so readily available? Only recently have we been able to decipher this mystery, by scanning the brains of people as they process both positive and negative information about the future. The findings are striking: when people learn, their neurons faithfully encode desirable information that can enhance optimism but fail at incorporating unexpectedly undesirable information. When we hear a success story like Mark Zuckerberg's, our brains take note of the possibility that we too may become immensely rich one day. But hearing that the odds of divorce are almost 1 in 2 tends not to make us think that our own marriages may be destined to fail. (See "A Primer for Pessimists.")
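
A minimal sketch of the asymmetry those scans point to: a personal risk estimate is revised strongly when the news is better than feared, and only weakly when it is worse. The learning rates and figures are invented for illustration and are not taken from the study:

# Asymmetric belief updating: good news moves the estimate more than bad news.
def update_belief(estimate, evidence, rate_good=0.7, rate_bad=0.2):
    error = evidence - estimate
    rate = rate_good if error < 0 else rate_bad     # a lower risk counts as good news
    return estimate + rate * error

belief = 0.20                                   # believed personal divorce risk
belief = update_belief(belief, evidence=0.50)   # undesirable statistic: barely absorbed
print(round(belief, 2))                         # 0.26

belief = update_belief(belief, evidence=0.10)   # desirable statistic: readily absorbed
print(round(belief, 2))                         # 0.15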

Why would our brains be wired in this way? It is tempting to speculate that optimism was selected by evolution precisely because, on balance, positive expectations enhance the odds of survival. Research findings that optimists live longer and are healthier, plus the fact that most humans display optimistic biases — and emerging data that optimism is linked to specific genes — all strongly support this hypothesis. Yet optimism is also irrational and can lead to unwanted outcomes. The question then is, How can we remain hopeful — benefiting from the fruits of optimism — while at the same time guarding ourselves from its pitfalls?

I believe knowledge is key. We are not born with an innate understanding of our biases. The brain's illusions have to be identified by careful scientific observation and controlled experiments and then communicated to the rest of us. Once we are made aware of our optimistic illusions, we can act to protect ourselves. The good news is that awareness rarely shatters the illusion. The glass remains half full. It is possible, then, to strike a balance, to believe we will stay healthy, but get medical insurance anyway; to be certain the sun will shine, but grab an umbrella on our way out — just in case.

Adapted from The Optimism Bias, by Tali Sharot. Copyright © 2011 Tali Sharot. Reprinted with permission of Pantheon Books, a division of Random House Inc. All rights reserved

Sharot is a research fellow at University College London's Wellcome Trust Centre for Neuroimaging


A man-made world

Of course, this realization has something to do with the way the "natural sciences" are usually distinguished from the "artificial sciences."

The Anthropocene


A man-made world

Science is recognising humans as a geological force to be reckoned with

THE here and now are defined by astronomy and geology. Astronomy takes care of the here: a planet orbiting a yellow star embedded in one of the spiral arms of the Milky Way, a galaxy that is itself part of the Virgo supercluster, one of millions of similarly vast entities dotted through the sky. Geology deals with the now: the 10,000-year-old Holocene epoch, a peculiarly stable and clement part of the Quaternary period, a time distinguished by regular shifts into and out of ice ages. The Quaternary forms part of the 65m-year Cenozoic era, distinguished by the opening of the North Atlantic, the rise of the Himalayas, and the widespread presence of mammals and flowering plants. This era in turn marks the most recent part of the Phanerozoic aeon, the 540m-year chunk of the Earth’s history wherein rocks with fossils of complex organisms can be found. The regularity of celestial clockwork and the solid probity of rock give these co-ordinates a reassuring constancy.
Now there is a movement afoot to change humanity’s co-ordinates. In 2000 Paul Crutzen, an eminent atmospheric chemist, realised he no longer believed he was living in the Holocene. He was living in some other age, one shaped primarily by people. From their trawlers scraping the floors of the seas to their dams impounding sediment by the gigatonne, from their stripping of forests to their irrigation of farms, from their mile-deep mines to their melting of glaciers, humans were bringing about an age of planetary change. With a colleague, Eugene Stoermer, Dr Crutzen suggested this age be called the Anthropocene—“the recent age of man”.
The term has slowly picked up steam, both within the sciences (the International Commission on Stratigraphy, ultimate adjudicator of the geological time scale, is taking a formal interest) and beyond. This May statements on the environment by concerned Nobel laureates and the Pontifical Academy of Sciences both made prominent use of the term, capitalising on the way in which it dramatises the sheer scale of human activity.
The advent of the Anthropocene promises more, though, than a scientific nicety or a new way of grabbing the eco-jaded public’s attention. The term “paradigm shift” is bandied around with promiscuous ease. But for the natural sciences to make human activity central to their conception of the world, rather than a distraction, would mark such a shift for real. For centuries, science has progressed by making people peripheral. In the 16th century Nicolaus Copernicus moved the Earth from its privileged position at the centre of the universe. In the 18th James Hutton opened up depths of geological time that dwarf the narrow now. In the 19th Charles Darwin fitted humans onto a single twig of the evolving tree of life. As Simon Lewis, an ecologist at the University of Leeds, points out, embracing the Anthropocene as an idea means reversing this trend. It means treating humans not as insignificant observers of the natural world but as central to its workings, elemental in their force.
Sous la plage, les pavés
The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.
The fossils of living creatures will be distinctive, too. Geologists define periods through assemblages of fossil life reliably found together. One of the characteristic markers of the Anthropocene will be the widespread remains of organisms that humans use, or that have adapted to life in a human-dominated world. According to studies by Erle Ellis, an ecologist at the University of Maryland, Baltimore County, the vast majority of ecosystems on the planet now reflect the presence of people. There are, for instance, more trees on farms than in wild forests. And these anthropogenic biomes are spread about the planet in a way that the ecological arrangements of the prehuman world were not. The fossil record of the Anthropocene will thus show a planetary ecosystem homogenised through domestication.
More sinisterly, there are the fossils that will not be found. Although it is not yet inevitable, scientists warn that if current trends of habitat loss continue, exacerbated by the effects of climate change, a dramatic wave of extinctions could follow before long.
All these things would show future geologists that humans had been present. But though they might be diagnostic of the time in which humans lived, they would not necessarily show that those humans shaped their time in the way that people pushing the idea of the Anthropocene want to argue. The strong claim of those announcing the recent dawning of the age of man is that humans are not just spreading over the planet, but are changing the way it works.
Such workings are the province of Earth-system science, which sees the planet not just as a set of places, or as the subject of a history, but also as a system of forces, flows and feedbacks that act upon each other. This system can behave in distinctive and counterintuitive ways, including sometimes flipping suddenly from one state to another. To an Earth-system scientist the difference between the Quaternary period (which includes the Holocene) and the Neogene, which came before it, is not just what was living where, or what the sea level was; it is that in the Neogene the climate stayed stable whereas in the Quaternary it swung in and out of a series of ice ages. The Earth worked differently in the two periods.
The clearest evidence for the system working differently in the Anthropocene comes from the recycling systems on which life depends for various crucial elements. In the past couple of centuries people have released quantities of fossil carbon that the planet took hundreds of millions of years to store away. This has given them a commanding role in the planet’s carbon cycle.
Although the natural fluxes of carbon dioxide into and out of the atmosphere are still more than ten times larger than the amount that humans put in every year by burning fossil fuels, the human addition matters disproportionately because it unbalances those natural flows. As Mr Micawber wisely pointed out, a small change in income can, in the absence of a compensating change in outlays, have a disastrous effect. The result of putting more carbon into the atmosphere than can be taken out of it is a warmer climate, a melting Arctic, higher sea levels, improvements in the photosynthetic efficiency of many plants, an intensification of the hydrologic cycle of evaporation and precipitation, and new ocean chemistry.
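
A back-of-the-envelope sketch of the Micawber point, with round illustrative numbers rather than figures from the article: the natural exchange is huge but roughly balanced, so even a comparatively small, uncompensated human addition accumulates relentlessly.

# Illustrative carbon bookkeeping (round numbers; GtC = gigatonnes of carbon).
human_emissions = 9.0       # GtC per year from fossil fuels (illustrative)
absorbed_fraction = 0.55    # share mopped up by land and ocean sinks (illustrative)
gtc_per_ppm = 2.13          # approx. GtC per ppm of atmospheric CO2

net_per_year = human_emissions * (1 - absorbed_fraction)   # the uncompensated remainder
print(f"net accumulation: {net_per_year:.1f} GtC/yr, "
      f"about {net_per_year / gtc_per_ppm:.1f} ppm/yr")
print(f"over a century: roughly {100 * net_per_year / gtc_per_ppm:.0f} ppm added")
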
All of these have knock-on effects both on people and on the processes of the planet. More rain means more weathering of mountains. More efficient photosynthesis means less evaporation from croplands. And the changes in ocean chemistry are the sort of thing that can be expected to have a direct effect on the geological record if carbon levels rise far enough.
At a recent meeting of the Geological Society of London that was devoted to thinking about the Anthropocene and its geological record, Toby Tyrrell of the University of Southampton pointed out that pale carbonate sediments—limestones, chalks and the like—cannot be laid down below what is called a “carbonate compensation depth”. And changes in chemistry brought about by the fossil-fuel carbon now accumulating in the ocean will raise the carbonate compensation depth, rather as a warmer atmosphere raises the snowline on mountains. Some ocean floors which are shallow enough for carbonates to precipitate out as sediment in current conditions will be out of the game when the compensation depth has risen, like ski resorts too low on a warming alp. New carbonates will no longer be laid down. Old ones will dissolve. This change in patterns of deep-ocean sedimentation will result in a curious, dark band of carbonate-free rock—rather like that which is seen in sediments from the Palaeocene-Eocene thermal maximum, an episode of severe greenhouse warming brought on by the release of pent-up carbon 56m years ago.
The fix is in
No Dickensian insights are necessary to appreciate the scale of human intervention in the nitrogen cycle. One crucial part of this cycle—the fixing of pure nitrogen from the atmosphere into useful nitrogen-containing chemicals—depends more or less entirely on living things (lightning helps a bit). And the living things doing most of that work are now people (see chart). By adding industrial clout to the efforts of the microbes that used to do the job single-handed, humans have increased the annual amount of nitrogen fixed on land by more than 150%. Some of this is accidental. Burning fossil fuels tends to oxidise nitrogen at the same time. The majority is done on purpose, mostly to make fertilisers. This has a variety of unwholesome consequences, most importantly the increasing number of coastal “dead zones” caused by algal blooms feeding on fertiliser-rich run-off waters.
Industrial nitrogen’s greatest environmental impact, though, is to increase the number of people. Although nitrogen fixation is not just a gift of life—it has been estimated that 100m people were killed by explosives made with industrially fixed nitrogen in the 20th century’s wars—its net effect has been to allow a huge growth in population. About 40% of the nitrogen in the protein that humans eat today got into that food by way of artificial fertiliser. There would be nowhere near as many people doing all sorts of other things to the planet if humans had not sped the nitrogen cycle up.
It is also worth noting that unlike many of humanity’s other effects on the planet, the remaking of the nitrogen cycle was deliberate. In the late 19th century scientists diagnosed a shortage of nitrogen as a planet-wide problem. Knowing that natural processes would not improve the supply, they invented an artificial one, the Haber process, that could make up the difference. It was, says Mark Sutton of the Centre for Ecology and Hydrology in Edinburgh, the first serious human attempt at geoengineering the planet to bring about a desired goal. The scale of its success outstripped the imaginings of its instigators. So did the scale of its unintended consequences.
For many of those promoting the idea of the Anthropocene, further geoengineering may now be in order, this time on the carbon front. Left to themselves, carbon-dioxide levels in the atmosphere are expected to remain high for 1,000 years—more, if emissions continue to go up through this century. It is increasingly common to hear climate scientists arguing that this means things should not be left to themselves—that the goal of the 21st century should be not just to stop the amount of carbon in the atmosphere increasing, but to start actively decreasing it. This might be done in part by growing forests (see article) and enriching soils, but it might also need more high-tech interventions, such as burning newly grown plant matter in power stations and pumping the resulting carbon dioxide into aquifers below the surface, or scrubbing the air with newly contrived chemical-engineering plants, or intervening in ocean chemistry in ways that would increase the sea’s appetite for the air’s carbon.
To think of deliberately interfering in the Earth system will undoubtedly be alarming to some. But so will an Anthropocene deprived of such deliberation. A way to try and split the difference has been propounded by a group of Earth-system scientists inspired by (and including) Dr Crutzen under the banner of “planetary boundaries”. The planetary-boundaries group, which published a sort of manifesto in 2009, argues for increased restraint and, where necessary, direct intervention aimed at bringing all sorts of things in the Earth system, from the alkalinity of the oceans to the rate of phosphate run-off from the land, close to the conditions pertaining in the Holocene. Carbon-dioxide levels, the researchers recommend, should be brought back from whatever they peak at to a level a little higher than the Holocene’s and a little lower than today’s.
The idea behind this precautionary approach is not simply that things were good the way they were. It is that the further the Earth system gets from the stable conditions of the Holocene, the more likely it is to slip into a whole new state and change itself yet further.
You maniacs! You blew it up!
The Earth’s history shows that the planet can indeed tip from one state to another, amplifying the sometimes modest changes which trigger the transition. The nightmare would be a flip to some permanently altered state much further from the Holocene than things are today: a hotter world with much less productive oceans, for example. Such things cannot be ruled out. On the other hand, the invocation of poorly defined tipping points is a well worn rhetorical trick for stirring the fears of people unperturbed by current, relatively modest, changes.
In general, the goal of staying at or returning close to Holocene conditions seems judicious. It remains to be seen if it is practical. The Holocene never supported a civilisation of 10 billion reasonably rich people, as the Anthropocene must seek to do, and there is no proof that such a population can fit into a planetary pot so circumscribed. So it may be that a “good Anthropocene”, stable and productive for humans and other species they rely on, is one in which some aspects of the Earth system’s behaviour are lastingly changed. For example, the Holocene would, without human intervention, have eventually come to an end in a new ice age. Keeping the Anthropocene free of ice ages will probably strike most people as a good idea.
Dreams of a smart planet
That is an extreme example, though. No new ice age is due for some millennia to come. Nevertheless, to see the Anthropocene as a blip that can be minimised, and from which the planet, and its people, can simply revert to the status quo, may be to underestimate the sheer scale of what is going on.
Take energy. At the moment the amount of energy people use is part of what makes the Anthropocene problematic, because of the carbon dioxide given off. That problem will not be solved soon enough to avert significant climate change unless the Earth system is a lot less prone to climate change than most scientists think. But that does not mean it will not be solved at all. And some of the zero-carbon energy systems that solve it—continent-scale electric grids distributing solar energy collected in deserts, perhaps, or advanced nuclear power of some sort—could, in time, be scaled up to provide much more energy than today’s power systems do. As much as 100 clean terawatts, compared to today’s dirty 15TW, is not inconceivable for the 22nd century. That would mean humanity was producing roughly as much useful energy as all the world’s photosynthesis combined.
In a fascinating recent book, “Revolutions that Made the Earth”, Timothy Lenton and Andrew Watson, Earth-system scientists at the universities of Exeter and East Anglia respectively, argue that large changes in the amount of energy available to the biosphere have, in the past, always marked large transitions in the way the world works. They have a particular interest in the jumps in the level of atmospheric oxygen seen about 2.4 billion years ago and 600m years ago. Because oxygen is a particularly good way of getting energy out of organic matter (if it weren’t, there would be no point in breathing) these shifts increased sharply the amount of energy available to the Earth’s living things. That may well be why both of those jumps seem to be associated with subsequent evolutionary leaps—the advent of complex cells, in the first place, and of large animals, in the second. Though the details of those links are hazy, there is no doubt that in their aftermath the rules by which the Earth system operated had changed.
The growing availability of solar or nuclear energy over the coming centuries could mark the greatest new energy resource since the second of those planetary oxidations, 600m years ago—a change in the same class as the greatest the Earth system has ever seen. Dr Lenton (who is also one of the creators of the planetary-boundaries concept) and Dr Watson suggest that energy might be used to change the hydrologic cycle with massive desalination equipment, or to speed up the carbon cycle by drawing down atmospheric carbon dioxide, or to drive new recycling systems devoted to tin and copper and the many other metals as vital to industrial life as carbon and nitrogen are to living tissue. Better to embrace the Anthropocene’s potential as a revolution in the way the Earth system works, they argue, than to try to retreat onto a low-impact path that runs the risk of global immiseration.
Such a choice is possible because of the most fundamental change in Earth history that the Anthropocene marks: the emergence of a form of intelligence that allows new ways of being to be imagined and, through co-operation and innovation, to be achieved. The lessons of science, from Copernicus to Darwin, encourage people to dismiss such special pleading. So do all manner of cultural warnings, from the hubris around which Greek tragedies are built to the lamentation of King David’s preacher: “Vanity of vanities, all is vanity…the Earth abideth for ever…and there is no new thing under the sun.” But the lamentation of vanity can be false modesty. On a planetary scale, intelligence is something genuinely new and powerful. Through the domestication of plants and animals intelligence has remade the living environment. Through industry it has disrupted the key biogeochemical cycles. For good or ill, it will do yet more.
It may seem nonsense to think of the (probably sceptical) intelligence with which you interpret these words as something on a par with plate tectonics or photosynthesis. But dam by dam, mine by mine, farm by farm and city by city it is remaking the Earth before your eyes.

--

The term Anthropocene was originally coined by ecologist Eugene Stoermer but subsequently popularized by the Nobel Prize-winning scientist Paul Crutzen by analogy with the word "Holocene." The Greek roots are anthropo- meaning "human" and -cene meaning "new." Crutzen has explained, "I was at a conference where someone said something about the Holocene. I suddenly thought this was wrong. The world has changed too much. So I said: 'No, we are in the Anthropocene.' I just made up the word on the spur of the moment. Everyone was shocked. But it seems to have stuck."[6] Crutzen first used it in print in a 2000 newsletter of the International Geosphere-Biosphere Programme (IGBP), No.41. In 2008, Zalasiewicz suggested in GSA Today that an Anthropocene epoch is now appropriate.[7]

Saturday, May 28, 2011

UNESCO: Natural Sciences

UNESCO

Education
Natural Sciences
Social and Human Sciences

“The Soul of a New Machine”


“The Soul of a New Machine” -- a Chinese translation of this book exists, and Reader's Digest ran a condensed excerpt.

THE HARDY BOYS AND THE MICROKIDS MAKE A COMPUTER


Thursday, May 26, 2011

Scientific Inference (Harold Jeffreys) 科學推斷


Scientific Inference - Google Books result

Harold Jeffreys - 2011 - Science - 282 pages
CHAPTER I LOGIC AND SCIENTIFIC INFERENCE The Master said, Yu, shall I tell you what knowledge is? When you know a thing, to know that you know it, ...

Harold Jeffreys,Scientific Inference, Third edition,Cambridge - Amazon

A scientific theory is originally based on a particular set of observations. How can it be extended to apply outside this original range of cases? ...

---
hc's review
The translation of this book has quite a few problems.
A third edition is available online; it is unclear why the translation was based on the second edition.
The index is long-winded yet inaccurate.
In the title of the classic The Grammar of Science, "Grammar" does not mean "grammar" in the linguistic sense;
it means something like "fundamental principles"; one Chinese translation renders it roughly as "the paradigm of science" (科學典範).
The only passage from Russell for which the English original is supplied is also mistranslated.
In terms such as "fallacy of certainty," "fallacy" is rendered throughout as 論 ("theory")...

科學推斷

  • Author: Harold Jeffreys (UK)
  • Publisher: Xiamen University Press
  • Publication date: 2011

Wednesday, May 25, 2011

'two-faced' rupture caused Japanese destruction



Stanford research finds unusual 'two-faced' rupture caused Japanese destruction

Updated: 05/24/2011 10:07:36 PM PDT

The catastrophe that struck Japan in March was triggered by a sequence of unusual geologic events, according to new research by a team of Stanford University and University of Tokyo scientists.

The fault that generated the Tohoku-Oki earthquake did not fracture in the usual way, they report in the latest issue of the journal Science Express. Instead, it ruptured in a "flip-flop" fashion -- first breaking westward, then eastward.

The first motion violently shook Japan, with magnitude-9 shocks. The second motion -- generating magnitude-6.5 aftershocks -- deformed the seafloor with such force that a huge tsunami was triggered.
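
For a sense of scale, the standard relation between moment magnitude and radiated seismic energy, log10(E) ≈ 1.5M + 4.8 with E in joules, puts the magnitude-9 rupture thousands of times above the magnitude-6.5 aftershocks. A quick sketch (the formula is the conventional approximation, not a figure from this study):

# Radiated seismic energy from moment magnitude: log10(E) = 1.5*M + 4.8 (joules).
def seismic_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

e_main = seismic_energy_joules(9.0)
e_after = seismic_energy_joules(6.5)
print(f"M9.0 ~ {e_main:.1e} J, M6.5 ~ {e_after:.1e} J, "
      f"ratio ~ {e_main / e_after:.0f}x")     # about 5,600-fold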

Damage from the March 11 earthquake was extensive in part simply because it was so large, according to Stanford geophysicist Greg Beroza.

But the two-faced rupture made the devastation greater than it might have been otherwise, he said.

"Now that this "... has been observed in the Tohoku-Oki earthquake, what we need to figure out is whether similar earthquakes -- and large tsunamis -- could happen in other subduction zones around the world," Beroza said.

The project was a collaborative effort. Stanford's Beroza and graduate student Annemarie Baltay measured the energy released by the quake, while University of Tokyo's Satoshi Ide modeled the slippage of the fault.

There is a denser network of seismometers in Japan than in any other place in the world, Beroza said. These sensors provided the team with much more detailed data than is normally available after an earthquake, enabling them to discern the different phases of the March 11 temblor with much greater resolution than usual.

The earthquake occurred in a known subduction zone, where one great tectonic plate is being forced down under another tectonic plate and into the Earth's interior along an active fault.

But no one predicted its ferocity. The earthquake was the largest ever recorded in Japan, and tied for fourth largest in the world since 1900. The 30-foot tsunami washed over sea walls and swept inland for miles. The death toll is expected to be more than 20,000.

The deeper part of the quake's fault plane, which sloped downward to the west, was bound by dense, hard rock on each side. This rock transmitted the seismic waves very efficiently, maximizing the shaking.

The shallower part of the fault surface, which sloped upward to the east and surfaced at the Japan Trench -- where the overlying plate is warped downward by the motion of the descending plate -- had massive slip.

This punched the ocean water upward with great ferocity. To make matters worse, the rupture occurred in deep ocean, so a large volume of water was displaced.

"It exploded into tremendously large slip," Beroza said. "It displaced the seafloor dramatically.

"This amplification of slip near the surface was predicted in computer simulations of earthquake rupture, but this is the first time we have clearly seen it occur in a real earthquake."

Sunday, May 22, 2011

PillPick Robot

Swisslog's pharmacy automation solutions offer complete automation from the packaging of bulk medications, to storage, dispensing, and logistics, as well as Inventory Management Software offering supply chain control from the dock to the patient, including 340B drug pricing. View Swisslog's North America Solutions for the inpatient pharmacy and our solutions for optimizing drug management operations.

Swisslog’s PillPick pharmacy automation system provides a comprehensive approach from unit dose packaging through medication dispensing. PillPick offers the ultimate automated pharmacy system providing patient safety, medication dispensing efficiency, and pharmacy inventory management.

Swisslog also offers BoxPicker, a high-density automated pharmacy warehouse for the storage and dispensing of medications, refrigerated medications, and supplies. BoxPicker is faster and more secure than vertical carousels.


MedRover™ to Debut at AONE Annual Meeting & Exhibition

DENVER, Colo. (April 5, 2011) – Swisslog, a leading provider of automated materials transport and medication management solutions for hospitals, today announced that its MedRover™ mobile dispensing cabinet will debut next week at the American Organization of Nursing Executives (AONE) Annual Meeting & Exhibition in San Diego.




ATP High-Speed Tablet Packager (North America)

Swisslog’s ATP system (available only in North America) is a versatile packaging solution that provides easy filling and refilling of medications through high-speed dispensing, accurate labeling of medication pouches, flexible printing package sizes and bar-coding. The packager interfaces with pharmacy information systems for automatic replenishment of unit-based cabinets, patient carts or nurse servers.


Pharmacy Automation Systems

Swisslog’s PillPick pharmacy automation and drug management system packages, stores and dispenses bar-coded unit dose medications. Unit doses are automatically placed by PillPicker, Swisslog’s pharmacy packaging unit, into bar-code labeled bags and sealed.

Swisslog’s medication storage and dispensing unit, DrugNest, is a high-density pharmacy robot for automated storage and medication dispensing of bar-coded unit doses. Packaged, unit dose medications are loaded automatically from the PillPicker to the DrugNest without intermediate material handling. Pharmacy medication dispensing is integrated with downstream pharmacy automation components including cassette filling and PickRing – Swisslog’s unique medication dispensing method.



Pharmacy Storage/Retrieval System

BoxPicker

Swisslog's BoxPicker is a high-density, automated pharmacy warehouse for the storage and dispensing of medications, refrigerated medications, and supplies. BoxPicker is a cost-effective alternative to vertical carousel storage and retrieval, and to refrigerated drug management in the hospital pharmacy. Visit our Hospital Pharmacy Drug Storage and Retrieval System page for more information on the benefits of BoxPicker for the pharmacy.

Swisslog also offers StockManager, a modular bar-coding solution with complete hospital pharmacy medication inventory management and automatic restocking ordering capability. Contact Swisslog Healthcare Solutions for more information.




Loyola University Hospital in Chicago has installed a robotic pharmacist on premises in an attempt to reduce the effect of human error from the pharmacy storage, packaging, and distribution system. The robot, dubbed PillPick, is produced by Swisslog of Buchs, Switzerland.

The robot places single doses of medication in small plastic bags. Each bag has a bar code that identifies the drug. When the system is fully implemented, the nurse will scan the bar code on the medication bag, along with the bar code on the patient’s wrist band. If the computer detects it’s the wrong drug or wrong dose, a pop-up warning will appear and the computer will sound an alert.
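
The bedside check described above boils down to comparing what was scanned against what was ordered. Here is a minimal sketch of that matching logic; the field names, the order record and the alert behaviour are hypothetical illustrations, not Swisslog's actual software:

# Hypothetical sketch: compare the scanned bag and wristband against the order.
def verify_administration(order, wristband_id, bag):
    """Return a list of problems; an empty list means the scan checks out."""
    problems = []
    if wristband_id != order["patient_id"]:
        problems.append("wrong patient")
    if bag["drug"] != order["drug"]:
        problems.append("wrong drug")
    if bag["dose"] != order["dose"]:
        problems.append("wrong dose")
    return problems

order = {"patient_id": "P-1043", "drug": "heparin", "dose": "10 units/mL"}
issues = verify_administration(order, wristband_id="P-1043",
                               bag={"drug": "heparin", "dose": "10,000 units/mL"})
if issues:
    print("ALERT:", ", ".join(issues))   # pop-up warning plus audible alert
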
Hospitals around the country are beginning to use robotics in the pharmacy. Loyola is the first hospital in the Midwest to use the most advanced system of its kind. It’s called PillPick,® manufactured by SwissLog Healthcare Solutions.
"We looked at five systems, and this one was the most innovative," said Richard Ricker, administrative director of the pharmacy department, Loyola.
The system is 28 feet long and 13 feet wide. At the front end, a robot arm packages medications in single-dose bags. At the back end, a patient’s medication bags are arranged in order of administration and attached to a plastic ring. A card attached to the ring specifies each drug, along with important patient information.
The robot packages 3,200 medications, including tablets, capsules, vials, ampules and suppositories. It works around the clock.
The robot is designed to eliminate the type of serious human error involving actor Dennis Quaid's newborn twins last November. The infants were supposed to receive 10 units per milliliter of the blood thinner Heparin. Instead they received 10,000 units. The 10-unit vials and 10,000-unit vials looked similar, and a pharmacy technician mistakenly placed them in the same drawer.
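
The arithmetic of that error is stark: the two concentrations differ by a factor of a thousand, exactly the kind of discrepancy a simple exact-match guard can flag. The figures below restate the incident; the check itself is an illustration, not part of any real dispensing system:

# Illustrative arithmetic of the mix-up described above.
ordered = 10        # units per mL that were intended
dispensed = 10_000  # units per mL in the vial actually used

print(f"overdose factor: {dispensed / ordered:.0f}x")    # 1000x

if dispensed != ordered:
    print("BLOCK: dispensed concentration does not match the order")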

Product page: PillPick automated unit dose packaging, storage and dispensing system…
Press release: $1.5 Million Robot at Loyola Cuts Risk of Drug Errors…
(hat tip: Medical Quack)

Monday, May 16, 2011

The new tech bubble




Silicon Valley and the technology industry

The new tech bubble

Irrational exuberance has returned to the internet world. Investors should beware

SOME time after the dotcom boom turned into a spectacular bust in 2000, bumper stickers began appearing in Silicon Valley imploring: “Please God, just one more bubble.” That wish has now been granted. Compared with the rest of America, Silicon Valley feels like a boomtown. Corporate chefs are in demand again, office rents are soaring and the pay being offered to talented folk in fashionable fields like data science is reaching Hollywood levels. And no wonder, given the prices now being put on web companies.

Facebook and Twitter are not listed, but secondary-market trades value them at some $76 billion (more than Boeing or Ford) and $7.7 billion respectively. This week LinkedIn, a social network for professionals, said it hopes to be valued at up to $3.3 billion in an initial public offering (IPO). The next day Microsoft announced its purchase of Skype, an internet calling and video service, for a frothy-looking $8.5 billion—ten times its sales last year and 400 times its operating income. And those are all big-brand companies with customers around the world. Prices look even more excessive for fledgling firms in the private market (Color, a photo-sharing social network, was recently said to be worth $100m, even though it has an untested service) or for anything involving China. There has been a stampede for shares in Renren, hailed as “China’s Facebook”, and other Chinese web giants listed on American exchanges.
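
Working the Skype multiples backwards gives a feel for how stretched the price is. The figures below are simply implied by the ratios quoted above, rounded:

# Implied Skype financials from the quoted multiples.
price = 8.5e9            # Microsoft's purchase price, in dollars
sales_multiple = 10      # "ten times its sales last year"
income_multiple = 400    # "400 times its operating income"

print(f"implied sales: ${price / sales_multiple / 1e9:.2f}bn")             # ~$0.85bn
print(f"implied operating income: ${price / income_multiple / 1e6:.0f}m")  # ~$21m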

Same again, only different

So is history indeed about to repeat itself? Those who think not point out that the tech landscape has changed dramatically since the late 1990s. Back then few people were plugged into the internet; today there are 2 billion netizens, many of them in huge new wired markets such as China. A dozen years ago ultra-fast broadband connections were rare; today they are ubiquitous. And last time many start-ups (remember Webvan and Pets.com) had massive ambitions but puny revenues; today web stars such as Groupon, which offers its users online coupons, and Zynga, a social-gaming company, have phenomenal sales and already make respectable profits.

The this-time-it’s-different brigade also points out that the 1990s bubble expanded only after numerous web firms were floated on stockmarkets and naive investors pumped up the price of their shares to insane levels. This time, there have been relatively few big internet IPOs (though that is likely to change). And there is no sign of the widespread mania in the high-tech world that occurred last time around: the NASDAQ stockmarket index, a bellwether for the tech industry, has been rising but is still far below its peak of March 2000.

In one respect the optimists are right. This time is indeed different, though not because the boom-and-bust cycle has miraculously disappeared. It is different because the tech bubble-in-the-making is forming largely out of sight in private markets and has a global dimension that its predecessor lacked.

The bubble is being pumped partly by wealthy “angel” investors, some of whom made their fortunes in the late-1990s IPO boom. Their financial firepower has increased and they are battling one another for stakes in web start-ups (see article). In some cases angels are skimping on due diligence to win deals. When it comes to investing in more established companies like Facebook and the bigger web firms, traditional venture capitalists now face competition from private-equity companies and bank-led funds hunting for profits in a bleak investment environment. Gucci-shod leveraged-buy-out kings may appear to be more sophisticated than the waitresses buying dotcom shares a decade ago—but many of the newcomers are no more knowledgeable about technology.

This boom also has wider horizons than the previous one. It was arguably started by Russian investors. Skype was born in Estonia. Finland’s Rovio, which makes the popular Angry Birds smartphone game, recently raised $42m. And then there’s China. Renren and Youku, “China’s YouTube”, supposedly offer investors a chance to profit both from the country’s extraordinary growth and from the broader impact of the internet on commerce and society. Chinese web start-ups often command $15m-20m valuations in early financing rounds, far more than their peers in America.

These differences will have important consequences. The first is that the bubble forming in the private market could be pretty big by the time it floats into the public one. Facebook may turn out to be the next Google, and LinkedIn has a fairly solid revenue plan. But they will be followed by less robust outfits—the Facebook and LinkedIn wannabes—with prices that have been dangerously inflated by the angels’ antics.

The froth in China’s web industry could also lead to unrealistic valuations elsewhere. And it may be China that causes the web bubble eventually to burst. Few of those rushing to buy Chinese shares have thought through the political risks these companies face because of the sensitivity of their content. A clampdown on a prominent web firm could startle investors and prompt a broader sell-off, as could a financial scandal.

And after the angels have fallen?

With luck the latest web bubble will do less damage than its predecessor. In the 1990s internet euphoria caused a dramatic inflation in the price of telecoms firms, which were creating the infrastructure for the web. When internet firms’ share prices plummeted, telecoms investors suffered too. So far, there has been no sign of such a spillover effect this time around. But the globalisation of the internet industry means that many more people could be tempted to dabble in web stocks in the current boom, adding to the pain of the bust.

When will that be? This paper warned about both the last internet bubble and the American property bubble long before they burst. Irrational exuberance rarely gives way to rational scepticism quickly. So some bets on start-ups now will pay off. But investors should take a great deal of care when it comes to picking firms to back: they cannot just rely on somebody else paying even more later. And they might want to put another bumper sticker on their cars: “Thanks, God. Now give me the wisdom to sell before it’s too late.”

Saturday, May 14, 2011

Products made from coffee grounds

About ten years ago, coffee-ground products in New York took the form of fire logs.


Clothing made from coffee grounds: deodorizing, quick-drying, UV-blocking
A textile mill turns coffee grounds into high-tech coffee yarn; garments made from it deodorize and block ultraviolet rays. (Photo by reporter Tsai Pai-ling)

[Reported by Tsai Pai-ling, New Taipei] Can coffee grounds change people's lives? Singtex Industrial Co. (興采實業), a textile manufacturer in the New Taipei Industrial Park, has developed a "high-tech coffee yarn" made from recycled coffee grounds. It deodorizes, dries quickly and protects against ultraviolet rays. Singtex began applying three years ago for what it calls the world's only coffee-yarn invention patent, obtained certification in Taiwan and China early this year, and is pressing ahead with patent applications in Europe, the United States, Japan and several other countries.

Chen Kuo-chin, Singtex's general manager and chairman, says the idea of studying coffee grounds came to him one day while drinking coffee with his wife. He found that, besides deodorizing sachets, coffee grounds could be turned into soap, facial masks and more, and could even become part of a fabric, so in 2005 the company began developing coffee yarn; it is now producing the eighth generation.

Singtex collects waste coffee grounds from coffee shops and convenience stores, extracts their oils and impurities, turns the grounds into masterbatch through a nano-scale process, and then spins them into eco-friendly high-tech coffee yarn and fabric; the grounds from every three cups of coffee are enough for one garment.

The company's "eco energy-saving shirt" is woven from yarn made of recycled coffee grounds, PET bottles and jade from Hualien; it is breathable, moisture-absorbing and cool to the touch. Adding coffee grounds to the fiber yarn makes it quick-drying, odor-absorbing and UV-resistant. On the same principle, towels, bedding, socks and the like can all be made without difficulty.

Wednesday, May 11, 2011

To counter online censorship in China and elsewhere, the US pours money into new technology

Washington

According to AFP, the United States will invest heavily in developing new technologies to break through the internet news blockades of countries such as China and Iran. Michael Posner, the US Assistant Secretary of State responsible for human rights, announced in Washington on Tuesday local time (May 10) that the US will allocate 19 million euros to develop technical systems that can get around online surveillance in China, Iran and other authoritarian states. The new technology will be able to identify material on the internet that is censored in particular countries and, in a targeted way, help that material break through the blockade and reach those countries. Posner noted that developing circumvention technology is a cat-and-mouse game, and that America's goal is to keep "the cat always one step ahead." A spokesperson for the press-freedom group Reporters Without Borders welcomed the US government's decision. The organization pointed out that China is currently detaining 77 online activists. China routinely prevents domestic internet users from reaching web pages whose views differ from the authorities' and blocks those pages from spreading sensitive content, for example about the Dalai Lama, the Falun Gong movement banned in China, or the events of June 4, 1989. Recently, after US Secretary of State Hillary Clinton gave a speech on internet news blockades, the Chinese authorities blocked search terms such as "Hillary Clinton"; "jasmine," which calls to mind the revolutions in the Arab world, was also added to the list of blocked search terms.

Tuesday, May 10, 2011

Vertical Axis Wind Turbines (Kazuichi Seki and Izumi Ushiyama)

垂直軸風車 (Vertical Axis Wind Turbines)

Authors: Kazuichi Seki and Izumi Ushiyama; reviewed by Huei-Jeng Lin

精通系列 (Mastery series), v.04

Publication date: April 2011

Publisher: National Taiwan University Press

Binding: paperback

Language: Chinese

ISBN: 978-986-02-7515-5

List price: NT$350


About the book

Vertical Axis Wind Turbines was the first book on vertical-axis wind turbines to be published in Japan, and it remains one of the few books on the subject anywhere. Its authors, Professors Kazuichi Seki and Izumi Ushiyama, have devoted themselves to research on vertical-axis wind turbines since the early 1970s. With global warming, environmental problems coming to the fore and the depletion of fossil-fuel resources, making greater use of renewable energy such as wind and solar power is an important task that must be pursued faster and more vigorously. In hilly terrain and in cities, where the airflow is unstable, vertical-axis turbines are particularly well suited to sites where the wind direction changes sharply.

The book is aimed not only at undergraduates and graduate students in energy-related fields and at researchers, engineers and managers in industry and research institutes, but also at community groups, energy planners and policy makers. It presents the theory of vertical-axis turbines, records the design methods and practical know-how gained from actually building wind machines, and discusses the prospects for large-scale use of vertical-axis turbines in the future.

Chapters 6 through 10 in particular bring together Kazuichi Seki's thirty years of dedicated research on the development of straight-bladed vertical-axis wind turbines, which the authors believe makes this an exceptionally useful technical book.


About the authors
Kazuichi Seki (關和市), Doctor of Engineering
1963: Aerodynamics group, Research Institute of Aeronautics and Astronautics, Tokai University
1991: Professor, Research Institute of Development Technology, Tokai University
1997: Professor, Research Institute of Integrated Science and Technology, Tokai University
2006: Professor, Energy Development Research Center, MingDao University, Taiwan
Other research areas: subsonic, transonic, supersonic and hypersonic aerodynamics; human-powered aircraft; ventilation of large tunnels; applied aerodynamics of flying vehicles, ground vehicles and structures; energy-conversion engineering; wind-turbine engineering; wind-power generation systems.
Part-time lecturer, College of Science and Technology, Nihon University.
Member of the NEDO wind-power survey committee, NEF councillor, and president of the Japan Wind Energy Association.
Publications: 風力発電Q&A (学献社, 2002)

Izumi Ushiyama (牛山泉), Doctor of Engineering
1971: Completed the doctoral program of the Graduate School of Science and Technology, Sophia University
Currently: Vice-president of Ashikaga Institute of Technology, professor in its Graduate School of Engineering, and director of the university's Comprehensive Research Center.
Visiting professor at Zhejiang University of Technology (China) and MingDao University (Taiwan); part-time lecturer at Sophia University, Keio University, the College of Land, Infrastructure, Transport and Tourism, the JICA Tsukuba International Center and elsewhere.
Specializes in energy-conversion engineering.
Publications: 小型風車ハンドブック (パワー社, 1980); 手作り風車ガイド (パワー社, 1995); さわやかエネルギー風車入門 (三省堂, 1991); 風車工学入門 (森北出版, 2002); 風力エネルギーの基礎 (オーム社, 2005)

About the reviewer
Huei-Jeng Lin (林輝政), Ph.D.
Professor, Department of Engineering Science and Ocean Engineering, National Taiwan University
President, National Penghu University of Science and Technology

Thursday, May 5, 2011

Intel hails revolution in 3D chip technology?

May 5, 2011, 06:21 AM
Intel hails revolution in 3D chip technology
Financial Times, reported by Richard Waters and Chris Nuttall in San Francisco




Intel has claimed the biggest breakthrough in microprocessor design in more than 50 years, potentially raising the stakes significantly for rivals in the increasingly capital-intensive global chip industry.


The world's biggest chipmaker said on Wednesday that it would begin producing chips later this year using a revolutionary 3D technology that has been nearly a decade in the making, and which it said would act as the foundation for generations of computing advances to come.


The new technology represents one of Intel's biggest gambles in the race to maintain and even extend its long-standing lead over other chipmakers in making chips smaller and faster, while breathing fresh life into the remorseless cycle of chip improvements on which the modern computing and electronics industries are founded.


The impact of Intel's attempt to push ahead of the rest of the industry was felt more widely on Wednesday, as Applied Materials, which supplies Intel with manufacturing equipment, announced a $4.9bn acquisition to keep up with the new technology.


The US equipment maker said it would buy Varian Semiconductor Equipment to give it the capability to handle chips of greater complexity than those whose circuits are only 22 billionths of a metre wide – the scale at which Intel said it would begin manufacturing before the end of this year.


Intel called its new chip design the most significant advance since the introduction in the 1950s of the silicon transistor, the building block in electronics. It said the breakthrough would also extend Moore's Law – the accurate 1965 prediction by Intel co-founder Gordon Moore that the number of transistors on a chip could be doubled roughly every two years.


That exponential rise in processing power has formed the basis for the steady advances in electronics since, though many in the industry fear that the chipmakers are approaching the limits of their ability to continue the improvements.
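
Moore's prediction can be written as a simple doubling rule, N(t) = N0 * 2^((t - t0)/2). A quick sketch of what "doubling roughly every two years" compounds to, seeded with the 2,300 transistors of Intel's first microprocessor, the 4004 of 1971 (an order-of-magnitude illustration only):

# Moore's law as a doubling rule: transistor count doubles roughly every two years.
def transistors(year, base_year=1971, base_count=2_300, doubling_period=2.0):
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
# 1971 -> 2,300; 1991 -> ~2.4 million; 2011 -> ~2.4 billion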

