The Optimism Bias
By Tali Sharot | Saturday, May 28, 2011
We like to think of ourselves as rational creatures. We watch our backs, weigh the odds, pack an umbrella. But both neuroscience and social science suggest that we are more optimistic than realistic. On average, we expect things to turn out better than they wind up being. People hugely underestimate their chances of getting divorced, losing their job or being diagnosed with cancer; expect their children to be extraordinarily gifted; envision themselves achieving more than their peers; and overestimate their likely life span (sometimes by 20 years or more).
The belief that the future will be much better than the past and present is known as the optimism bias. It abides in every race, region and socioeconomic bracket. Schoolchildren playing when-I-grow-up are rampant optimists, but so are grownups: a 2005 study found that adults over 60 are just as likely to see the glass half full as young adults.
You might expect optimism to erode under the tide of news about violent conflicts, high unemployment, tornadoes and floods and all the threats and failures that shape human life. Collectively we can grow pessimistic — about the direction of our country or the ability of our leaders to improve education and reduce crime. But private optimism, about our personal future, remains incredibly resilient. A survey conducted in 2007 found that while 70% thought families in general were less successful than in their parents' day, 76% of respondents were optimistic about the future of their own family.
Overly positive assumptions can lead to disastrous miscalculations — make us less likely to get health checkups, apply sunscreen or open a savings account, and more likely to bet the farm on a bad investment. But the bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.
To make progress, we need to be able to imagine alternative realities — better ones — and we need to believe that we can achieve them. Such faith helps motivate us to pursue our goals. Optimists in general work longer hours and tend to earn more. Economists at Duke University found that optimists even save more. And although they are not less likely to divorce, they are more likely to remarry — an act that is, as Samuel Johnson wrote, the triumph of hope over experience.
Even if that better future is often an illusion, optimism has clear benefits in the present. Hope keeps our minds at ease, lowers stress and improves physical health. Researchers studying heart-disease patients found that optimists were more likely than nonoptimistic patients to take vitamins, eat low-fat diets and exercise, thereby reducing their overall coronary risk. A study of cancer patients revealed that pessimistic patients under the age of 60 were more likely to die within eight months than nonpessimistic patients of the same initial health status and age.
In fact, a growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. The science of optimism, once scorned as an intellectually suspect province of pep rallies and smiley faces, is opening a new window on the workings of human consciousness. What it shows could fuel a revolution in psychology, as the field comes to grips with accumulating evidence that our brains aren't just stamped by the past. They are constantly being shaped by the future.
Hardwired for Hope?
I would have liked to tell you that my work on optimism grew out of a keen interest in the positive side of human nature. The reality is that I stumbled onto the brain's innate optimism by accident. After living through Sept. 11, 2001, in New York City, I had set out to investigate people's memories of the terrorist attacks. I was intrigued by the fact that people felt their memories were as accurate as a videotape, while often they were filled with errors. A survey conducted around the country showed that 11 months after the attacks, individuals' recollections of their experience that day were consistent with their initial accounts (given in September 2001) only 63% of the time. They were also poor at remembering details of the event, such as the names of the airline carriers. Where did these mistakes in memory come from?
Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future — to enable us to prepare for what has yet to come. The system is not designed to perfectly replay past events, the researchers claimed. It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted.
To test this, I decided to record the brain activity of volunteers while they imagined future events — not events on the scale of 9/11, but events in their everyday lives — and compare those results with the pattern I observed when the same individuals recalled past events. But something unexpected occurred. Once people started imagining the future, even the most banal life events seemed to take a dramatic turn for the better. Mundane scenes brightened with upbeat details as if polished by a Hollywood script doctor. You might think that imagining a future haircut would be pretty dull. Not at all. Here is what one of my participants pictured: "I was getting my hair cut to donate to Locks of Love [a charity that fashions wigs for young cancer patients]. It had taken me years to grow it out, and my friends were all there to help celebrate. We went to my favorite hair place in Brooklyn and then went to lunch at our favorite restaurant."
I asked another participant to imagine a plane ride. "I imagined the takeoff — my favorite! — and then the eight-hour-long nap in between and then finally landing in Krakow and clapping for the pilot for providing the safe voyage," she responded. No tarmac delays, no screaming babies. The world, only a year or two into the future, was a wonderful place to live in.
If all our participants insisted on thinking positively when it came to what lay in store for them personally, what does that tell us about how our brains are wired? Is the human tendency for optimism a consequence of the architecture of our brains?
The Human Time Machine
To think positively about our prospects, we must first be able to imagine ourselves in the future. Optimism starts with what may be the most extraordinary of human talents: mental time travel, the ability to move back and forth through time and space in one's mind. Although most of us take this ability for granted, our capacity to envision a different time and place is in fact critical to our survival.
It is easy to see why cognitive time travel was naturally selected for over the course of evolution. It allows us to plan ahead, to save food and resources for times of scarcity and to endure hard work in anticipation of a future reward. It also lets us forecast how our current behavior may influence future generations. If we were not able to picture the world in a hundred years or more, would we be concerned with global warming? Would we attempt to live healthily? Would we have children?
While mental time travel has clear survival advantages, conscious foresight came to humans at an enormous price — the understanding that somewhere in the future, death awaits. Ajit Varki, a biologist at the University of California, San Diego, argues that the awareness of mortality on its own would have led evolution to a dead end. The despair would have interfered with our daily function, bringing the activities needed for survival to a stop. The only way conscious mental time travel could have arisen over the course of evolution is if it emerged together with irrational optimism. Knowledge of death had to emerge side by side with the persistent ability to picture a bright future.
The capacity to envision the future relies partly on the hippocampus, a brain structure that is crucial to memory. Patients with damage to their hippocampus are unable to recollect the past, but they are also unable to construct detailed images of future scenarios. They appear to be stuck in time. The rest of us constantly move back and forth in time; we might think of a conversation we had with our spouse yesterday and then immediately of our dinner plans for later tonight.
But the brain doesn't travel in time in a random fashion. It tends to engage in specific types of thoughts. We consider how well our kids will do in life, how we will obtain that sought-after job, afford that house on the hill and find perfect love. We imagine our team winning the crucial game, look forward to an enjoyable night on the town or picture a winning streak at the blackjack table. We also worry about losing loved ones, failing at our job or dying in a terrible plane crash — but research shows that most of us spend less time mulling over negative outcomes than we do over positive ones. When we do contemplate defeat and heartache, we tend to focus on how these can be avoided.
Findings from a study I conducted a few years ago with prominent neuroscientist Elizabeth Phelps suggest that directing our thoughts of the future toward the positive is a result of our frontal cortex's communicating with subcortical regions deep in our brain. The frontal cortex, a large area behind the forehead, is the most recently evolved part of the brain. It is larger in humans than in other primates and is critical for many complex human functions such as language and goal setting.
Using a functional magnetic resonance imaging (fMRI) scanner, we recorded brain activity in volunteers as they imagined specific events that might occur to them in the future. Some of the events that I asked them to imagine were desirable (a great date or winning a large sum of money), and some were undesirable (losing a wallet, ending a romantic relationship). The volunteers reported that their images of sought-after events were richer and more vivid than those of unwanted events.
This matched the enhanced activity we observed in two critical regions of the brain: the amygdala, a small structure deep in the brain that is central to the processing of emotion, and the rostral anterior cingulate cortex (rACC), an area of the frontal cortex that modulates emotion and motivation. The rACC acts like a traffic conductor, enhancing the flow of positive emotions and associations. The more optimistic a person was, the higher the activity in these regions was while imagining positive future events (relative to negative ones) and the stronger the connectivity between the two structures.
The findings were particularly fascinating because these precise regions — the amygdala and the rACC — show abnormal activity in depressed individuals. While healthy people expect the future to be slightly better than it ends up being, people with severe depression tend to be pessimistically biased: they expect things to be worse than they end up being. People with mild depression are relatively accurate when predicting future events. They see the world as it is. In other words, in the absence of a neural mechanism that generates unrealistic optimism, it is possible all humans would be mildly depressed.
Can Optimism Change Reality?
The problem with pessimistic expectations, such as those of the clinically depressed, is that they have the power to alter the future; negative expectations shape outcomes in a negative way. How do expectations change reality?
To answer this question, my colleague, cognitive neuroscientist Sara Bengtsson, devised an experiment in which she manipulated students' positive and negative expectations while scanning their brains, then tested their performance on cognitive tasks. To induce expectations of success, she primed college students with words such as smart, intelligent and clever just before asking them to perform a test. To induce expectations of failure, she primed them with words like stupid and ignorant. The students performed better after being primed with an affirmative message.
Examining the brain-imaging data, Bengtsson found that the students' brains responded differently to the mistakes they made depending on whether they were primed with the word clever or the word stupid. When the mistake followed positive words, she observed enhanced activity in the anterior medial part of the prefrontal cortex (a region that is involved in self-reflection and recollection). However, when the participants were primed with the word stupid, there was no heightened activity after a wrong answer. It appears that after being primed with the word stupid, the brain expected to do poorly and did not show signs of surprise or conflict when it made an error.
A brain that doesn't expect good results lacks a signal telling it, "Take notice — wrong answer!" These brains will fail to learn from their mistakes and are less likely to improve over time. Expectations become self-fulfilling by altering our performance and actions, which ultimately affects what happens in the future. Often, however, expectations simply transform the way we perceive the world without altering reality itself. Let me give you an example. As I write these lines, a friend calls. He is at Heathrow Airport waiting to get on a plane to Austria for a skiing holiday. His plane has been delayed for three hours already because of snowstorms at his destination. "I guess this is both a good and bad thing," he says. Waiting at the airport is not pleasant, but he quickly concludes that snow today means better skiing conditions tomorrow. His brain works to match the unexpected misfortune of being stuck at the airport to its eager anticipation of a fun getaway.
A canceled flight is hardly tragic, but even when the incidents that befall us are the type of horrific events we never expected to encounter, we automatically seek evidence confirming that our misfortune is a blessing in disguise. No, we did not anticipate losing our job, being ill or getting a divorce, but when these incidents occur, we search for the upside. These experiences mature us, we think. They may lead to more fulfilling jobs and stable relationships in the future. Interpreting a misfortune in this way allows us to conclude that our sunny expectations were correct after all — things did work out for the best.
How do we find the silver lining in storm clouds? To answer that, my colleagues — renowned neuroscientist Ray Dolan and neurologist Tamara Shiner — and I instructed volunteers in the fMRI scanner to visualize a range of medical conditions, from broken bones to Alzheimer's, and rate how bad they imagined these conditions to be. Then we asked them: If you had to endure one of the following, which would you rather have — a broken leg or a broken arm? Heartburn or asthma? Finally, they rated all the conditions again. Minutes after choosing one particular illness out of many, the volunteers suddenly found that the chosen illness was less intimidating. A broken leg, for example, may have been thought of as "terrible" before choosing it over some other malady. However, after choosing it, the subject would find a silver lining: "With a broken leg, I will be able to lie in bed watching TV, guilt-free."
In our study, we also found that people perceived adverse events more positively if they had experienced them in the past. Recording brain activity while these reappraisals took place revealed that highlighting the positive within the negative involves, once again, a tête-à-tête between the frontal cortex and subcortical regions processing emotional value. While contemplating a mishap, like a broken leg, activity in the rACC modulated signals in a region called the striatum that conveyed the good and bad of the event in question — biasing activity in a positive direction.
It seems that our brain possesses the philosopher's stone that enables us to turn lead into gold and helps us bounce back to normal levels of well-being. It is wired to place high value on the events we encounter and put faith in its own decisions. This is true not only when forced to choose between two adverse options (such as selecting between two courses of medical treatment) but also when we are selecting between desirable alternatives. Imagine you need to pick between two equally attractive job offers. Making a decision may be a tiring, difficult ordeal, but once you make up your mind, something miraculous happens. Suddenly — if you are like most people — you view the chosen offer as better than you did before and conclude that the other option was not that great after all. According to social psychologist Leon Festinger, we re-evaluate the options postchoice to reduce the tension that arises from making a difficult decision between equally desirable options.
In a brain-imaging study I conducted with Ray Dolan and Benedetto De Martino in 2009, we asked subjects to imagine going on vacation to 80 different destinations and rate how happy they thought they would be in each place. We then asked them to select one destination from two choices that they had rated exactly the same. Would you choose Paris over Brazil? Finally, we asked them to imagine and rate all the destinations again. Seconds after picking between two destinations, people rated their selected destination higher than before and rated the discarded choice lower than before.
The brain-imaging data revealed that these changes were happening in the caudate nucleus, a cluster of nerve cells that is part of the striatum. The caudate has been shown to process rewards and signal their expectation. If we believe we are about to be given a paycheck or eat a scrumptious chocolate cake, the caudate acts as an announcer broadcasting to other parts of the brain, "Be ready for something good." After we receive the reward, the value is quickly updated. If there is a bonus in the paycheck, this higher value will be reflected in striatal activity. If the cake is disappointing, the decreased value will be tracked so that next time our expectations will be lower.
In our experiment, after a decision was made between two destinations, the caudate nucleus rapidly updated its signal. Before choosing, it might signal "thinking of something great" while imagining both Greece and Thailand. But after choosing Greece, it now broadcast "thinking of something remarkable!" for Greece and merely "thinking of something good" for Thailand.
True, sometimes we regret our decisions; our choices can turn out to be disappointing. But on balance, when you make a decision — even if it is a hypothetical choice — you will value it more and expect it to bring you pleasure.
This affirmation of our decisions helps us derive heightened pleasure from choices that might actually be neutral. Without this, our lives might well be filled with second-guessing. Have we done the right thing? Should we change our mind? We would find ourselves stuck, overcome by indecision and unable to move forward.
The Puzzle of Optimism
While the past few years have seen important advances in the neuroscience of optimism, one enduring puzzle remained. How is it that people maintain this rosy bias even when information challenging our upbeat forecasts is so readily available? Only recently have we been able to decipher this mystery, by scanning the brains of people as they process both positive and negative information about the future. The findings are striking: when people learn, their neurons faithfully encode desirable information that can enhance optimism but fail at incorporating unexpectedly undesirable information. When we hear a success story like Mark Zuckerberg's, our brains take note of the possibility that we too may become immensely rich one day. But hearing that the odds of divorce are almost 1 in 2 tends not to make us think that our own marriages may be destined to fail.
Why would our brains be wired in this way? It is tempting to speculate that optimism was selected by evolution precisely because, on balance, positive expectations enhance the odds of survival. Research findings that optimists live longer and are healthier, plus the fact that most humans display optimistic biases — and emerging data that optimism is linked to specific genes — all strongly support this hypothesis. Yet optimism is also irrational and can lead to unwanted outcomes. The question then is, How can we remain hopeful — benefiting from the fruits of optimism — while at the same time guarding ourselves from its pitfalls?
I believe knowledge is key. We are not born with an innate understanding of our biases. The brain's illusions have to be identified by careful scientific observation and controlled experiments and then communicated to the rest of us. Once we are made aware of our optimistic illusions, we can act to protect ourselves. The good news is that awareness rarely shatters the illusion. The glass remains half full. It is possible, then, to strike a balance, to believe we will stay healthy, but get medical insurance anyway; to be certain the sun will shine, but grab an umbrella on our way out — just in case.
Adapted from The Optimism Bias, by Tali Sharot. Copyright © 2011 Tali Sharot. Reprinted with permission of Pantheon Books, a division of Random House Inc. All rights reserved.
Sharot is a research fellow at University College London's Wellcome Trust Centre for Neuroimaging
Science is recognising humans as a geological force to be reckoned with
May 26th 2011 | from the print edition
THE here and now are defined by astronomy and geology. Astronomy takes care of the here: a planet orbiting a yellow star embedded in one of the spiral arms of the Milky Way, a galaxy that is itself part of the Virgo supercluster, one of millions of similarly vast entities dotted through the sky. Geology deals with the now: the 10,000-year-old Holocene epoch, a peculiarly stable and clement part of the Quaternary period, a time distinguished by regular shifts into and out of ice ages. The Quaternary forms part of the 65m-year Cenozoic era, distinguished by the opening of the North Atlantic, the rise of the Himalayas, and the widespread presence of mammals and flowering plants. This era in turn marks the most recent part of the Phanerozoic aeon, the 540m-year chunk of the Earth’s history wherein rocks with fossils of complex organisms can be found. The regularity of celestial clockwork and the solid probity of rock give these co-ordinates a reassuring constancy.
Now there is a movement afoot to change humanity’s co-ordinates. In 2000 Paul Crutzen, an eminent atmospheric chemist, realised he no longer believed he was living in the Holocene. He was living in some other age, one shaped primarily by people. From their trawlers scraping the floors of the seas to their dams impounding sediment by the gigatonne, from their stripping of forests to their irrigation of farms, from their mile-deep mines to their melting of glaciers, humans were bringing about an age of planetary change. With a colleague, Eugene Stoermer, Dr Crutzen suggested this age be called the Anthropocene—“the recent age of man”.
The term has slowly picked up steam, both within the sciences (the International Commission on Stratigraphy, ultimate adjudicator of the geological time scale, is taking a formal interest) and beyond. This May, statements on the environment by concerned Nobel laureates and the Pontifical Academy of Sciences both made prominent use of the term, capitalising on the way in which it dramatises the sheer scale of human activity.
The advent of the Anthropocene promises more, though, than a scientific nicety or a new way of grabbing the eco-jaded public’s attention. The term “paradigm shift” is bandied around with promiscuous ease. But for the natural sciences to make human activity central to their conception of the world, rather than a distraction, would mark such a shift for real. For centuries, science has progressed by making people peripheral. In the 16th century Nicolaus Copernicus moved the Earth from its privileged position at the centre of the universe. In the 18th James Hutton opened up depths of geological time that dwarf the narrow now. In the 19th Charles Darwin fitted humans onto a single twig of the evolving tree of life. As Simon Lewis, an ecologist at the University of Leeds, points out, embracing the Anthropocene as an idea means reversing this trend. It means treating humans not as insignificant observers of the natural world but as central to its workings, elemental in their force.
The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.
The fossils of living creatures will be distinctive, too. Geologists define periods through assemblages of fossil life reliably found together. One of the characteristic markers of the Anthropocene will be the widespread remains of organisms that humans use, or that have adapted to life in a human-dominated world. According to studies by Erle Ellis, an ecologist at the University of Maryland, Baltimore County, the vast majority of ecosystems on the planet now reflect the presence of people. There are, for instance, more trees on farms than in wild forests. And these anthropogenic biomes are spread about the planet in a way that the ecological arrangements of the prehuman world were not. The fossil record of the Anthropocene will thus show a planetary ecosystem homogenised through domestication.
More sinisterly, there are the fossils that will not be found. Although it is not yet inevitable, scientists warn that if current trends of habitat loss continue, exacerbated by the effects of climate change, a dramatic wave of extinctions could follow before long.
All these things would show future geologists that humans had been present. But though they might be diagnostic of the time in which humans lived, they would not necessarily show that those humans shaped their time in the way that people pushing the idea of the Anthropocene want to argue. The strong claim of those announcing the recent dawning of the age of man is that humans are not just spreading over the planet, but are changing the way it works.
Such workings are the province of Earth-system science, which sees the planet not just as a set of places, or as the subject of a history, but also as a system of forces, flows and feedbacks that act upon each other. This system can behave in distinctive and counterintuitive ways, including sometimes flipping suddenly from one state to another. To an Earth-system scientist the difference between the Quaternary period (which includes the Holocene) and the Neogene, which came before it, is not just what was living where, or what the sea level was; it is that in the Neogene the climate stayed stable whereas in the Quaternary it swung in and out of a series of ice ages. The Earth worked differently in the two periods.
The clearest evidence for the system working differently in the Anthropocene comes from the recycling systems on which life depends for various crucial elements. In the past couple of centuries people have released quantities of fossil carbon that the planet took hundreds of millions of years to store away. This has given them a commanding role in the planet’s carbon cycle.
Although the natural fluxes of carbon dioxide into and out of the atmosphere are still more than ten times larger than the amount that humans put in every year by burning fossil fuels, the human addition matters disproportionately because it unbalances those natural flows. As Mr Micawber wisely pointed out, a small change in income can, in the absence of a compensating change in outlays, have a disastrous effect. The result of putting more carbon into the atmosphere than can be taken out of it is a warmer climate, a melting Arctic, higher sea levels, improvements in the photosynthetic efficiency of many plants, an intensification of the hydrologic cycle of evaporation and precipitation, and new ocean chemistry.
All of these have knock-on effects both on people and on the processes of the planet. More rain means more weathering of mountains. More efficient photosynthesis means less evaporation from croplands. And the changes in ocean chemistry are the sort of thing that can be expected to have a direct effect on the geological record if carbon levels rise far enough.
At a recent meeting of the Geological Society of London that was devoted to thinking about the Anthropocene and its geological record, Toby Tyrrell of the University of Southampton pointed out that pale carbonate sediments—limestones, chalks and the like—cannot be laid down below what is called a “carbonate compensation depth”. And changes in chemistry brought about by the fossil-fuel carbon now accumulating in the ocean will raise the carbonate compensation depth, rather as a warmer atmosphere raises the snowline on mountains. Some ocean floors which are shallow enough for carbonates to precipitate out as sediment in current conditions will be out of the game when the compensation depth has risen, like ski resorts too low on a warming alp. New carbonates will no longer be laid down. Old ones will dissolve. This change in patterns of deep-ocean sedimentation will result in a curious, dark band of carbonate-free rock—rather like that which is seen in sediments from the Palaeocene-Eocene thermal maximum, an episode of severe greenhouse warming brought on by the release of pent-up carbon 56m years ago.
No Dickensian insights are necessary to appreciate the scale of human intervention in the nitrogen cycle. One crucial part of this cycle—the fixing of pure nitrogen from the atmosphere into useful nitrogen-containing chemicals—depends more or less entirely on living things (lightning helps a bit). And the living things doing most of that work are now people (see chart). By adding industrial clout to the efforts of the microbes that used to do the job single-handed, humans have increased the annual amount of nitrogen fixed on land by more than 150%. Some of this is accidental. Burning fossil fuels tends to oxidise nitrogen at the same time. The majority is done on purpose, mostly to make fertilisers. This has a variety of unwholesome consequences, most importantly the increasing number of coastal “dead zones” caused by algal blooms feeding on fertiliser-rich run-off waters.
Industrial nitrogen’s greatest environmental impact, though, is to increase the number of people. Although nitrogen fixation is not just a gift of life—it has been estimated that 100m people were killed by explosives made with industrially fixed nitrogen in the 20th century’s wars—its net effect has been to allow a huge growth in population. About 40% of the nitrogen in the protein that humans eat today got into that food by way of artificial fertiliser. There would be nowhere near as many people doing all sorts of other things to the planet if humans had not sped the nitrogen cycle up.
It is also worth noting that unlike many of humanity’s other effects on the planet, the remaking of the nitrogen cycle was deliberate. In the late 19th century scientists diagnosed a shortage of nitrogen as a planet-wide problem. Knowing that natural processes would not improve the supply, they invented an artificial one, the Haber process, that could make up the difference. It was, says Mark Sutton of the Centre for Ecology and Hydrology in Edinburgh, the first serious human attempt at geoengineering the planet to bring about a desired goal. The scale of its success outstripped the imaginings of its instigators. So did the scale of its unintended consequences.
For many of those promoting the idea of the Anthropocene, further geoengineering may now be in order, this time on the carbon front. Left to themselves, carbon-dioxide levels in the atmosphere are expected to remain high for 1,000 years—more, if emissions continue to go up through this century. It is increasingly common to hear climate scientists arguing that this means things should not be left to themselves—that the goal of the 21st century should be not just to stop the amount of carbon in the atmosphere increasing, but to start actively decreasing it. This might be done in part by growing forests (see article) and enriching soils, but it might also need more high-tech interventions, such as burning newly grown plant matter in power stations and pumping the resulting carbon dioxide into aquifers below the surface, or scrubbing the air with newly contrived chemical-engineering plants, or intervening in ocean chemistry in ways that would increase the sea’s appetite for the air’s carbon.
To think of deliberately interfering in the Earth system will undoubtedly be alarming to some. But so will an Anthropocene deprived of such deliberation. A way to try and split the difference has been propounded by a group of Earth-system scientists inspired by (and including) Dr Crutzen under the banner of “planetary boundaries”. The planetary-boundaries group, which published a sort of manifesto in 2009, argues for increased restraint and, where necessary, direct intervention aimed at bringing all sorts of things in the Earth system, from the alkalinity of the oceans to the rate of phosphate run-off from the land, close to the conditions pertaining in the Holocene. Carbon-dioxide levels, the researchers recommend, should be brought back from whatever they peak at to a level a little higher than the Holocene’s and a little lower than today’s.
The idea behind this precautionary approach is not simply that things were good the way they were. It is that the further the Earth system gets from the stable conditions of the Holocene, the more likely it is to slip into a whole new state and change itself yet further.
The Earth’s history shows that the planet can indeed tip from one state to another, amplifying the sometimes modest changes which trigger the transition. The nightmare would be a flip to some permanently altered state much further from the Holocene than things are today: a hotter world with much less productive oceans, for example. Such things cannot be ruled out. On the other hand, the invocation of poorly defined tipping points is a well worn rhetorical trick for stirring the fears of people unperturbed by current, relatively modest, changes.
In general, the goal of staying at or returning close to Holocene conditions seems judicious. It remains to be seen if it is practical. The Holocene never supported a civilisation of 10 billion reasonably rich people, as the Anthropocene must seek to do, and there is no proof that such a population can fit into a planetary pot so circumscribed. So it may be that a “good Anthropocene”, stable and productive for humans and other species they rely on, is one in which some aspects of the Earth system’s behaviour are lastingly changed. For example, the Holocene would, without human intervention, have eventually come to an end in a new ice age. Keeping the Anthropocene free of ice ages will probably strike most people as a good idea.
That is an extreme example, though. No new ice age is due for some millennia to come. Nevertheless, to see the Anthropocene as a blip that can be minimised, and from which the planet, and its people, can simply revert to the status quo, may be to underestimate the sheer scale of what is going on.
Take energy. At the moment the amount of energy people use is part of what makes the Anthropocene problematic, because of the carbon dioxide given off. That problem will not be solved soon enough to avert significant climate change unless the Earth system is a lot less prone to climate change than most scientists think. But that does not mean it will not be solved at all. And some of the zero-carbon energy systems that solve it—continent-scale electric grids distributing solar energy collected in deserts, perhaps, or advanced nuclear power of some sort—could, in time, be scaled up to provide much more energy than today’s power systems do. As much as 100 clean terawatts, compared with today’s dirty 15TW, is not inconceivable for the 22nd century. That would mean humanity was producing roughly as much useful energy as all the world’s photosynthesis combined.
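The scale of that jump is easy to check. Only the roughly 15TW and 100TW figures below come from the text; the value for global photosynthesis is the one the closing comparison implies.

```python
# The energy comparison in the paragraph above, as arithmetic. Only the
# ~15 TW and 100 TW figures come from the text; the ~100 TW for global
# photosynthesis is the value the closing comparison implies.

today_tw = 15            # today's (mostly fossil) primary power
future_clean_tw = 100    # hypothetical clean supply in the 22nd century
photosynthesis_tw = 100  # implied power of all the world's photosynthesis

scale_up = future_clean_tw / today_tw
print(f"Required scale-up: ~{scale_up:.1f}x today's supply")
print(f"Share of photosynthetic power: {future_clean_tw / photosynthesis_tw:.0%}")
```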
In a fascinating recent book, “Revolutions that Made the Earth”, Timothy Lenton and Andrew Watson, Earth-system scientists at the universities of Exeter and East Anglia respectively, argue that large changes in the amount of energy available to the biosphere have, in the past, always marked large transitions in the way the world works. They have a particular interest in the jumps in the level of atmospheric oxygen seen about 2.4 billion years ago and 600m years ago. Because oxygen is a particularly good way of getting energy out of organic matter (if it weren’t, there would be no point in breathing) these shifts increased sharply the amount of energy available to the Earth’s living things. That may well be why both of those jumps seem to be associated with subsequent evolutionary leaps—the advent of complex cells, in the first place, and of large animals, in the second. Though the details of those links are hazy, there is no doubt that in their aftermath the rules by which the Earth system operated had changed.
The growing availability of solar or nuclear energy over the coming centuries could mark the greatest new energy resource since the second of those planetary oxidations, 600m years ago—a change in the same class as the greatest the Earth system has ever seen. Dr Lenton (who is also one of the creators of the planetary-boundaries concept) and Dr Watson suggest that energy might be used to change the hydrologic cycle with massive desalination equipment, or to speed up the carbon cycle by drawing down atmospheric carbon dioxide, or to drive new recycling systems devoted to tin and copper and the many other metals as vital to industrial life as carbon and nitrogen are to living tissue. Better to embrace the Anthropocene’s potential as a revolution in the way the Earth system works, they argue, than to try to retreat onto a low-impact path that runs the risk of global immiseration.
Such a choice is possible because of the most fundamental change in Earth history that the Anthropocene marks: the emergence of a form of intelligence that allows new ways of being to be imagined and, through co-operation and innovation, to be achieved. The lessons of science, from Copernicus to Darwin, encourage people to dismiss such special pleading. So do all manner of cultural warnings, from the hubris around which Greek tragedies are built to the lamentation of King David’s preacher: “Vanity of vanities, all is vanity…the Earth abideth for ever…and there is no new thing under the sun.” But the lamentation of vanity can be false modesty. On a planetary scale, intelligence is something genuinely new and powerful. Through the domestication of plants and animals intelligence has remade the living environment. Through industry it has disrupted the key biogeochemical cycles. For good or ill, it will do yet more.
It may seem nonsense to think of the (probably sceptical) intelligence with which you interpret these words as something on a par with plate tectonics or photosynthesis. But dam by dam, mine by mine, farm by farm and city by city it is remaking the Earth before your eyes.
Anthropocene was originally coined by the ecologist Eugene Stoermer and subsequently popularized by the Nobel Prize-winning scientist Paul Crutzen, by analogy with the word "Holocene." The Greek roots are anthropo- meaning "human" and -cene meaning "new." Crutzen has explained, "I was at a conference where someone said something about the Holocene. I suddenly thought this was wrong. The world has changed too much. So I said: 'No, we are in the Anthropocene.' I just made up the word on the spur of the moment. Everyone was shocked. But it seems to have stuck." Crutzen first used it in print in a 2000 newsletter of the International Geosphere-Biosphere Programme (IGBP), No.41. In 2008, Jan Zalasiewicz suggested in GSA Today that an Anthropocene epoch is now appropriate.
Scientific Inference (Harold Jeffreys); Chinese title: 科學推斷
CHAPTER I LOGIC AND SCIENTIFIC INFERENCE: The Master said, Yu, shall I tell you what knowledge is? When you know a thing, to know that you know it, ...
- "A scientific theory is originally based on a particular set of observations. How can it be extended to apply outside this original range of cases? ..."
The "Grammar" in the title of the classic The Grammar of Science does not mean grammar in the linguistic sense; it means "fundamental principles". There is a Chinese translation, under a title along the lines of 科學典範 ("the paradigm of science"); it renders the "fallacy" in terms such as "fallacy of certainty" as 論 ("theory")......
- Author: Harold Jeffreys (UK); Publisher: Xiamen University Press
Swisslog's pharmacy automation solutions offer complete automation, from the packaging of bulk medications to storage, dispensing, and logistics, along with Inventory Management Software offering supply chain control from the dock to the patient, including 340B drug pricing. View Swisslog's North America Solutions for the inpatient pharmacy and our solutions for optimizing drug management operations.
Swisslog’s PillPick pharmacy automation system provides a comprehensive approach from unit dose packaging through medication dispensing. PillPick offers the ultimate automated pharmacy system providing patient safety, medication dispensing efficiency, and pharmacy inventory management.
Swisslog also offers BoxPicker, a high-density automated pharmacy warehouse for the storage and dispensing of medications, refrigerated medications, and supplies. BoxPicker is faster and more secure than vertical carousels.
MedRover™ to Debut at AONE Annual Meeting & Exhibition
DENVER, Colo. (April 5, 2011) – Swisslog, a leading provider of automated materials transport and medication management solutions for hospitals, today announced that its MedRover™ mobile dispensing cabinet will debut next week at the American Organization of Nursing Executives (AONE) Annual Meeting & Exhibition in San Diego.
ATP High-Speed Tablet Packager (North America)
Swisslog’s Pharmacy Automation Systems
Swisslog’s PillPick system bar-code packages, stores and dispenses unit dose medications. Unit doses are automatically placed by PillPicker, Swisslog’s pharmacy packaging unit, into bar-code labeled bags and sealed.
Swisslog’s medication storage and dispensing unit, DrugNest, is a high-density pharmacy robot for automated storage and medication dispensing of bar-coded unit doses. Packaged, unit dose medications are loaded automatically from the PillPicker to the DrugNest without intermediate material handling. Pharmacy medication dispensing is integrated with downstream pharmacy automation components including cassette filling and PickRing – Swisslog’s unique medication dispensing method.
Pharmacy Storage/Retrieval System
Visit our Hospital Pharmacy Drug Storage and Retrieval System page for more information on the benefits of BoxPicker for the pharmacy.
Swisslog also offers StockManager, a modular bar-coding solution with complete hospital pharmacy medication inventory management and automatic restocking ordering capability. Contact Swisslog Healthcare Solutions for more information.
Swisslog PillPick Robot Mixing It Up at Loyola
by Michael on Apr 28, 2008 • 10:50 am
Loyola University Hospital in Chicago has installed a robotic pharmacist on premises in an attempt to reduce the effect of human error in its pharmacy storage, packaging, and distribution system. The robot, dubbed PillPick, is produced by Swisslog of Buchs, Switzerland.
The robot places single doses of medication in small plastic bags. Each bag has a bar code that identifies the drug. When the system is fully implemented, the nurse will scan the bar code on the medication bag, along with the bar code on the patient’s wrist band. If the computer detects it’s the wrong drug or wrong dose, a pop-up warning will appear and the computer will sound an alert.
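A minimal sketch of that bedside double-check might look as follows. The data structures and field names are invented for illustration; they are not Swisslog's actual API.

```python
# Hypothetical sketch of the bedside double-check described above: the
# nurse scans the bar code on the medication bag and on the patient's
# wrist band, and the system warns on any mismatch. Field names and data
# structures are invented for illustration, not Swisslog's API.

orders = {
    # patient_id -> (drug, dose) currently ordered
    "patient-001": ("heparin", "10 units/mL"),
}

def check_administration(patient_id, bag_drug, bag_dose):
    """Return (ok, message) for a scanned bag checked against the order."""
    order = orders.get(patient_id)
    if order is None:
        return False, "ALERT: no active order for this patient"
    drug, dose = order
    if bag_drug != drug:
        return False, f"ALERT: wrong drug (expected {drug})"
    if bag_dose != dose:
        return False, f"ALERT: wrong dose (expected {dose})"
    return True, "OK to administer"

# A look-alike vial mix-up would trip the dose check rather than reach the patient:
ok, msg = check_administration("patient-001", "heparin", "10,000 units/mL")
print(msg)   # ALERT: wrong dose (expected 10 units/mL)
```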
Hospitals around the country are beginning to use robotics in the pharmacy. Loyola is the first hospital in the Midwest to use the most advanced system of its kind. It’s called PillPick,® manufactured by SwissLog Healthcare Solutions.
"We looked at five systems, and this one was the most innovative," said Richard Ricker, administrative director of the pharmacy department, Loyola.
The system is 28 feet long and 13 feet wide. At the front end, a robot arm packages medications in single-dose bags. At the back end, a patient’s medication bags are arranged in order of administration and attached to a plastic ring. A card attached to the ring specifies each drug, along with important patient information.
The robot packages 3,200 medications, including tablets, capsules, vials, ampules and suppositories. It works around the clock.
The robot is designed to eliminate the type of serious human error that affected actor Dennis Quaid’s newborn twins last November. The infants were supposed to receive 10 units per milliliter of the blood thinner Heparin. Instead they received 10,000 units. The 10-unit vials and 10,000-unit vials looked similar, and a pharmacy technician mistakenly placed them in the same drawer.
Product page: PillPick automated unit dose packaging, storage and dispensing system…
Press release: $1.5 Million Robot at Loyola Cuts Risk of Drug Errors…
(hat tip: Medical Quack)
Silicon Valley and the technology industry
Irrational exuberance has returned to the internet world. Investors should beware
May 12th 2011 | from the print edition
SOME time after the dotcom boom turned into a spectacular bust in 2000, bumper stickers began appearing in Silicon Valley imploring: “Please God, just one more bubble.” That wish has now been granted. Compared with the rest of America, Silicon Valley feels like a boomtown. Corporate chefs are in demand again, office rents are soaring and the pay being offered to talented folk in fashionable fields like data science is reaching Hollywood levels. And no wonder, given the prices now being put on web companies.
Facebook and Twitter are not listed, but secondary-market trades value them at some $76 billion (more than Boeing or Ford) and $7.7 billion respectively. This week LinkedIn, a social network for professionals, said it hopes to be valued at up to $3.3 billion in an initial public offering (IPO). The next day Microsoft announced its purchase of Skype, an internet calling and video service, for a frothy-looking $8.5 billion—ten times its sales last year and 400 times its operating income. And those are all big-brand companies with customers around the world. Prices look even more excessive for fledgling firms in the private market (Color, a photo-sharing social network, was recently said to be worth $100m, even though it has an untested service) or for anything involving China. There has been a stampede for shares in Renren, hailed as “China’s Facebook”, and other Chinese web giants listed on American exchanges.
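Working backwards from the multiples quoted for the Skype deal gives a sense of how frothy the price is. The arithmetic below uses only the figures in the text ($8.5 billion, ten times sales, 400 times operating income).

```python
# Working backwards from the Skype multiples quoted above: an $8.5bn
# price at ten times sales and 400 times operating income implies the
# figures below (illustrative arithmetic only).

price_bn = 8.5
sales_multiple = 10
operating_income_multiple = 400

implied_sales_bn = price_bn / sales_multiple                        # ~$0.85bn revenue
implied_op_income_mn = price_bn * 1000 / operating_income_multiple  # ~$21m operating income

print(f"Implied sales: ${implied_sales_bn:.2f}bn")
print(f"Implied operating income: ${implied_op_income_mn:.2f}m")
```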
- Internet businesses: Another digital gold rush (May 12th 2011)
So is history indeed about to repeat itself? Those who think not point out that the tech landscape has changed dramatically since the late 1990s. Back then few people were plugged into the internet; today there are 2 billion netizens, many of them in huge new wired markets such as China. A dozen years ago ultra-fast broadband connections were rare; today they are ubiquitous. And last time many start-ups (remember Webvan and Pets.com) had massive ambitions but puny revenues; today web stars such as Groupon, which offers its users online coupons, and Zynga, a social-gaming company, have phenomenal sales and already make respectable profits.
The this-time-it’s-different brigade also points out that the 1990s bubble expanded only after numerous web firms were floated on stockmarkets and naive investors pumped up the price of their shares to insane levels. This time, there have been relatively few big internet IPOs (though that is likely to change). And there is no sign of the widespread mania in the high-tech world that occurred last time around: the NASDAQ stockmarket index, a bellwether for the tech industry, has been rising but is still far below its peak of March 2000.
In one respect the optimists are right. This time is indeed different, though not because the boom-and-bust cycle has miraculously disappeared. It is different because the tech bubble-in-the-making is forming largely out of sight in private markets and has a global dimension that its predecessor lacked.
The bubble is being pumped partly by wealthy “angel” investors, some of whom made their fortunes in the late-1990s IPO boom. Their financial firepower has increased and they are battling one another for stakes in web start-ups (see article). In some cases angels are skimping on due diligence to win deals. When it comes to investing in more established companies like Facebook and the bigger web firms, traditional venture capitalists now face competition from private-equity companies and bank-led funds hunting for profits in a bleak investment environment. Gucci-shod leveraged-buy-out kings may appear to be more sophisticated than the waitresses buying dotcom shares a decade ago—but many of the newcomers are no more knowledgeable about technology.
This boom also has wider horizons than the previous one. It was arguably started by Russian investors. Skype was born in Estonia. Finland’s Rovio, which makes the popular Angry Birds smartphone game, recently raised $42m. And then there’s China. Renren and Youku, “China’s YouTube”, supposedly offer investors a chance to profit both from the country’s extraordinary growth and from the broader impact of the internet on commerce and society. Chinese web start-ups often command $15m-20m valuations in early financing rounds, far more than their peers in America.
These differences will have important consequences. The first is that the bubble forming in the private market could be pretty big by the time it floats into the public one. Facebook may turn out to be the next Google, and LinkedIn has a fairly solid revenue plan. But they will be followed by less robust outfits—the Facebook and LinkedIn wannabes—with prices that have been dangerously inflated by the angels’ antics.
The froth in China’s web industry could also lead to unrealistic valuations elsewhere. And it may be China that causes the web bubble eventually to burst. Few of those rushing to buy Chinese shares have thought through the political risks these companies face because of the sensitivity of their content. A clampdown on a prominent web firm could startle investors and prompt a broader sell-off, as could a financial scandal.
With luck the latest web bubble will do less damage than its predecessor. In the 1990s internet euphoria caused a dramatic inflation in the price of telecoms firms, which were creating the infrastructure for the web. When internet firms’ share prices plummeted, telecoms investors suffered too. So far, there has been no sign of such a spillover effect this time around. But the globalisation of the internet industry means that many more people could be tempted to dabble in web stocks in the current boom, adding to the pain of the bust.
When will that be? This paper warned about both the last internet bubble and the American property bubble long before they burst. Irrational exuberance rarely gives way to rational scepticism quickly. So some bets on start-ups now will pay off. But investors should take a great deal of care when it comes to picking firms to back: they cannot just rely on somebody else paying even more later. And they might want to put another bumper sticker on their cars: “Thanks, God. Now give me the wisdom to sell before it’s too late.”
from the print edition | Leaders
[Reporter 蔡百靈, New Taipei] Can coffee grounds change people's lives? 興采實業 (Singtex), a textile firm in the New Taipei industrial park, has developed a "technical coffee yarn" spun from recycled coffee grounds. The yarn deodorizes, dries quickly and offers ultraviolet protection. Three years ago Singtex began applying for what it calls "the world's only coffee-yarn invention patent"; early this year the patent was certified in Taiwan and China, and the company is also hurrying to file applications in Europe, the United States, Japan and several other countries.
Authors: 關和市 and 牛山泉; reviewed by 林輝政
Publication date: April 2011
Publisher: National Taiwan University Press
Binding: Paperback
Language: Chinese
ISBN: 978-986-02-7515-5
List price: NT$350
Intel hails revolution in 3D chip technology
Intel has claimed the biggest breakthrough in microprocessor design in more than 50 years, potentially raising the stakes significantly for rivals in the increasingly capital-intensive global chip industry.
The world's biggest chipmaker said on Wednesday that it would begin producing chips later this year using a revolutionary 3D technology that has been nearly a decade in the making, and which it said would act as the foundation for generations of computing advances to come.
The new technology represents one of Intel's biggest gambles in the race to maintain and even extend its long-standing lead over other chipmakers in making chips smaller and faster, while breathing fresh life into the remorseless cycle of chip improvements on which the modern computing and electronics industries are founded.
The impact of Intel's attempt to push ahead of the rest of the industry was felt more widely on Wednesday, as Applied Materials, which supplies Intel with manufacturing equipment, announced a $4.9bn acquisition to keep up with the new technology.
The US equipment maker said it would buy Varian Semiconductor Equipment to give it the capability to handle chips of greater complexity than those whose circuits are only 22 billionths of a metre wide – the scale at which Intel said it would begin manufacturing before the end of this year.
Intel called its new chip design the most significant advance since the introduction in the 1950s of the silicon transistor, the building block in electronics. It said the breakthrough would also extend Moore's Law – the accurate 1965 prediction by Intel co-founder Gordon Moore that the number of transistors on a chip could be doubled roughly every two years.
That exponential rise in processing power has formed the basis for the steady advances in electronics since, though many in the industry fear that the chipmakers are approaching the limits of their ability to continue the improvements.
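Moore's prediction, as the article states it, is a simple compounding rule, which makes the scale of that exponential rise easy to tabulate. The 46-year horizon below (1965 to 2011) is chosen for illustration.

```python
# Moore's law as the article states it: transistor counts double roughly
# every two years. Compounding that over the 46 years from the 1965
# prediction to 2011 shows the scale of the exponential rise.

def projected_transistors(initial, years, doubling_period_years=2):
    """Project a transistor count forward under a fixed doubling period."""
    return initial * 2 ** (years / doubling_period_years)

growth = projected_transistors(1, 46)
print(f"Doubling every 2 years for 46 years: ~{growth:,.0f}x")
```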