Tuesday, October 27, 2009

Factories gear up in race for e-readers

香餑餑 (xiāng bōbo; Zhuyin ㄒㄧㄤ ㄅㄛ ˙ㄅㄛ) — literally "fragrant bun"; figuratively, a person or thing that everyone covets. It is the word the FT Chinese edition used in its headline for the article below. Dream of the Red Chamber, chapter 60: 「他奶奶病了,他又成了香餑餑了,都搶不到手。」 ("With his grandmother ill, he has become a sought-after treat again — no one can get their hands on him.")



By David Gelles in San Francisco and Robin Kwong in Taipei 2009-10-27

Global demand for e-reader devices is booming from San Francisco to Shanghai as bibliophiles around the world turn to digital books. As many as 5m e-readers are being produced this year internationally, and analysts say that could double in 2010.

A rush in orders is prompting suppliers to ramp up production capacity and is forcing manufacturers to reconfigure factories to meet orders.

At the centre of the manufacturing cycle for the devices is E Ink, the Boston-based producer of the majority of e-reader displays. E Ink holds about 90 per cent of the market, according to analysts, making displays for Amazon's Kindle, Sony's Reader and Barnes & Noble's Nook.

“The supply chain is concentrated and starts with one company,” says Sarah Rotman Epps, an analyst with Forrester Research, in reference to E Ink.

The deluge of orders E Ink has received in the second half of this year has forced the relatively small company to expand rapidly. It opened a factory near Boston this month and has increased revenues 250 per cent to $96m in the past nine months.

That the e-reader is in demand during the economic downturn makes it an especially compelling product. In search of a rare recession-defying hit, producers are investing in their technology and electronics manufacturers are turning over space for production.

“The demand is coming at a time when other non-e-reader categories are experiencing a decline, so there has been available factory space,” says E Ink vice-president of marketing Sri Peruvemba.

This is most apparent in Taiwan, where a number of big manufacturers have begun gearing up production plans for e-readers, an indication that companies believe the devices will become a mainstay in the broader consumer electronics industry.

Asus, the netbook pioneer, began preparations to enter the e-reader market in February, and plans to have its first, own-branded model ready by the end of this year at the earliest.

Wistron, the former contract manufacturing arm of Acer that was spun off in 2000, signalled its interest in the e-reader market when it last month acquired Polymer Vision, a Dutch e-paper company specialising in flexible, rollable displays.

Karl McGoldrick, chief executive of Polymer Vision, says there is “critical mass in potential volumes, critical mass in actual available content and attractive cost-price points” for e-readers. “To the extent that when a new industry has proven its growth potential, it is always a ripe time to get in,” he told the Financial Times.

Evidence of increasing investment came with Taiwan's two big flat panel makers, AU Optronics and Chi Mei Optoelectronics, both saying they have begun producing display panels and modules for e-readers.

AUO and CMO's entry into the market significantly eases mass production of e-readers because it ensures a readily available supply of a key component, analysts say.

“In terms of hardware, everything is ready, technology-wise,” said David Chen, senior industry analyst at Market Intelligence and Consulting Institute, a Taiwan government-backed think-tank.

The sudden popularity of e-readers has arrived in spite of the devices being short on sophistication. But the surge of interest has also forced E Ink and other producers to innovate rapidly in an effort to keep up with consumer expectations.

E Ink only distributes black and white displays so far, but says it will have a colour display on the market late next year. It will also soon face competition from several companies working on alternative display technologies.

Among them is Qisda, the contract manufacturing sister company of AU Optronics, which is planning to mass produce e-readers by the end of the year using its own displays. AU Optronics has unveiled a six-inch, flexible screen that it says will go into mass production next year.

At the same time, E Ink is having to work with a growing number of manufacturers. “Because it is a new category, it is not as easy to forecast the space as it is a more mature market,” said Mr Peruvemba.



Monday, October 19, 2009

Empirical Studies on Software Quality Mythology


Posted by Gavin Terrill on Oct 08, 2009

Community
Architecture,
Agile
Topics
Software Craftsmanship
Tags
Quality

Microsoft Research has released a summary of the results of empirical studies examining software engineering myths. The work, conducted by Nachi Nagappan, measures the impact on quality that common software engineering practices actually have. The analysis reveals:

  • More code coverage in testing doesn't necessarily correlate with a decrease in the number of post-release fixes required; the researchers cite many other factors that come into play (a miniature illustration follows this list).
  • TDD improves quality but takes longer (pdf): "What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer."
  • The use of assertions and code verification decreases bugs. Further, "Software engineers who were able to make productive use of assertions in their code base tended to be well-trained and experienced, a factor that contributed to the end results."
  • Organizational structure has a profound impact on quality: "Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85 percent."
  • A team being geographically distributed has a negligible impact on quality.
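
The coverage and assertion findings are easy to see in miniature. The Python sketch below is our own illustration, not code from the study: the first test executes every line of a buggy function, so line coverage reads 100 per cent, yet it asserts nothing and misses the bug; the assertion-based test catches it immediately.

    # A function with a deliberate off-by-one bug: it should sum values[0..n-1].
    def sum_first(values, n):
        total = 0
        for i in range(n - 1):   # BUG: stops one element early
            total += values[i]
        return total

    # This test runs every line above, so coverage tools report 100 per cent,
    # but it never checks the result -- the bug sails through.
    def test_sum_first_runs():
        sum_first([1, 2, 3], 3)

    # An assertion-based test fails at once:
    def test_sum_first_correct():
        assert sum_first([1, 2, 3], 3) == 6   # actual result: 3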

These research findings are now being used by Microsoft development groups, including for risk analysis and bug triaging on major projects such as Windows Vista SP2.

Sunday, October 11, 2009

Training to Climb an Everest of Digital Data



Published: October 11, 2009

MOUNTAIN VIEW, Calif. — It is a rare criticism of elite American university students that they do not think big enough. But that is exactly the complaint from some of the largest technology companies and the federal government.


At the heart of this criticism is data. Researchers and workers in fields as diverse as bio-technology, astronomy and computer science will soon find themselves overwhelmed with information. Better telescopes and genome sequencers are as much to blame for this data glut as are faster computers and bigger hard drives.

While consumers are just starting to comprehend the idea of buying external hard drives for the home capable of storing a terabyte of data, computer scientists need to grapple with data sets thousands of times as large and growing ever larger. (A single terabyte equals 1,000 gigabytes and could store about 1,000 copies of the Encyclopedia Britannica.)

The next generation of computer scientists has to think in terms of what could be described as Internet scale. Facebook, for example, uses more than 1 petabyte of storage space to manage its users’ 40 billion photos. (A petabyte is about 1,000 times as large as a terabyte, and could store about 500 billion pages of text.)

It was not long ago that the notion of one company having anything close to 40 billion photos would have seemed tough to fathom. Google, meanwhile, churns through 20 times that amount of information every single day just running data analysis jobs. In short order, DNA sequencing systems too will generate many petabytes of information a year.
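
To make those scales concrete, here is a back-of-the-envelope check in Python using only the figures quoted above; the implied average photo size is our own derivation, not a number from the article.

    # Scale check using the article's figures (decimal units).
    TB = 10**12                     # one terabyte, in bytes
    PB = 10**15                     # one petabyte = ~1,000 terabytes

    facebook_storage = 1 * PB       # quoted storage for Facebook's photos
    facebook_photos = 40 * 10**9    # 40 billion photos

    # Implied average size per stored photo (illustrative derivation).
    print(facebook_storage / facebook_photos)        # 25000.0 bytes, ~25 KB

    # Google is said to churn through 20 times that store every day.
    print(20 * facebook_storage / PB, "PB per day")  # 20.0 PB per day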

“It sounds like science fiction, but soon enough, you’ll hand a machine a strand of hair, and a DNA sequence will come out the other side,” said Jimmy Lin, an associate professor at the University of Maryland, during a technology conference held here last week.

The big question is whether the person on the other side of that machine will have the wherewithal to do something interesting with an almost limitless supply of genetic information.

At the moment, companies like I.B.M. and Google have their doubts.

For the most part, university students have used rather modest computing systems to support their studies. They are learning to collect and manipulate information on personal computers or what are known as clusters, where computer servers are cabled together to form a larger computer. But even these machines fail to churn through enough data to really challenge and train a young mind meant to ponder the mega-scale problems of tomorrow.

“If they imprint on these small systems, that becomes their frame of reference and what they’re always thinking about,” said Jim Spohrer, a director at I.B.M.’s Almaden Research Center.

Two years ago, I.B.M. and Google set out to change the mindset at universities by giving students broad access to some of the largest computers on the planet. The companies then outfitted the computers with software that Internet companies use to tackle their toughest data analysis jobs.

And, rather than building a big computer at each university, the companies created a system that let students and researchers tap into giant computers over the Internet.

This year, the National Science Foundation, a federal government agency, issued a vote of confidence for the project by splitting $5 million among 14 universities that want to teach their students how to grapple with big data questions.

The types of projects the 14 universities have already tackled veer into the mind-bending. For example, Andrew J. Connolly, an associate professor at the University of Washington, has turned to the high-powered computers to aid his work on the evolution of galaxies. Mr. Connolly works with data gathered by large telescopes that inch their way across the sky taking pictures of various objects.

The largest public database of such images available today comes from the Sloan Digital Sky Survey, which has about 80 terabytes of data, according to Mr. Connolly. A new system called the Large Synoptic Survey Telescope is set to take more detailed images of larger chunks of the sky and produce about 30 terabytes of data each night. Mr. Connolly’s graduate students have been set to work trying to figure out ways of coping with this much information.
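
Putting the two survey figures side by side shows why those students have their work cut out: at the stated rate, the new telescope would produce the Sloan survey's entire archive roughly every three nights. A one-line check of our own:

    # Nights for the LSST (~30 TB/night) to match Sloan's ~80 TB archive.
    print(80 / 30)   # ~2.7 nights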

Purdue, meanwhile, looks to carry techniques used to map the interactions between people in social networks over into the biological realm. Researchers are creating complex diagrams that illuminate the links between chemical reactions taking place in cells.

A similar effort at the University of California, Santa Barbara, centers on making a simple software interface — akin to the Google search bar — that will let researchers examine huge biological data sets for answers to specific queries.

Mr. Lin has encouraged his students to illuminate data with the help of Hadoop, an open-source software package that companies like Facebook and Yahoo use to split vast amounts of information into more manageable chunks.

One of these projects included a deep dive into the reams of documents released after the government’s probe into Enron, to create an analysis system that could identify how one employee’s internal communications had been connected to those from other employees and who had originated a specific decision.
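
As a concrete illustration of the Hadoop pattern described above, here is a minimal mapper/reducer pair in the Hadoop Streaming style, sketched around an Enron-like task: counting how often one employee writes to another. The tab-separated input format, file names and invocation are assumptions for the example, not details of Mr. Lin's actual system.

    #!/usr/bin/env python
    # Minimal Hadoop Streaming sketch: count sender->recipient message pairs.
    # Hypothetical invocation:
    #   hadoop jar hadoop-streaming.jar -mapper 'edges.py map' \
    #     -reducer 'edges.py reduce' -input enron/ -output edge_counts/
    # Assumed input lines: sender<TAB>recipient
    import sys

    def mapper():
        for line in sys.stdin:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 2:
                sender, recipient = parts
                print(f"{sender}->{recipient}\t1")   # emit one count per message

    def reducer():
        current, total = None, 0
        for line in sys.stdin:                       # keys arrive sorted by Hadoop
            key, count = line.rstrip("\n").split("\t")
            if key != current:
                if current is not None:
                    print(f"{current}\t{total}")
                current, total = key, 0
            total += int(count)
        if current is not None:
            print(f"{current}\t{total}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()

The same pair of scripts runs unchanged on a laptop (cat mail.tsv | edges.py map | sort | edges.py reduce), which is precisely what makes the MapReduce pattern teachable at desktop scale and runnable at petabyte scale.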

Mr. Lin shares the opinion of numerous other researchers that learning these types of analysis techniques will be vital for students in the coming years.

“Science these days has basically turned into a data-management problem,” Mr. Lin said.

By donating their computing wares to the universities, Google and I.B.M. hope to train a new breed of engineers and scientists to think in Internet scale. Of course, it’s not all good will backing these gestures. I.B.M. is looking for big data experts who can complement its consulting in areas like health care and financial services. It has already started working with customers to put together analytics systems built on top of Hadoop. Meanwhile, Google promotes just about anything that creates more information to index and search.

Nonetheless, the universities and the government benefit from I.B.M. and Google providing access to big data sets for experiments, simpler software and their computing wares.

“Historically, it has been tough to get the type of data these researchers need out of industry,” said James C. French, a research director at the National Science Foundation. “But we’re at this point where a biologist needs to see these types of volumes of information to begin to think about what is possible in terms of commercial applications.”

Tuesday, October 6, 2009

A High-Tech Hunt for Lost Art

Findings


[Photo: Maurizio Seracini, on scaffolding, and the "Battle of Marciano" mural. Credit: Kalpa Group Project]


Published: October 5, 2009

Florence, Italy

[Photo: Later art based on Leonardo da Vinci's sketches for "The Battle of Anghiari." Credit: Kalpa Group Project]

If you believe, as Maurizio Seracini does, that Leonardo da Vinci’s greatest painting is hidden inside a wall in Florence’s city hall, then there are two essential techniques for finding it. As usual, Leonardo anticipated both of them.

First, concentrate on scientific gadgetry. After spotting what seemed to be a clue to Leonardo’s painting left by another 16th-century artist, Dr. Seracini led an international team of scientists in mapping every millimeter of the wall and surrounding room with lasers, radar, ultraviolet light and infrared cameras. Once they identified the likely hiding place, they developed devices to detect the painting by firing neutrons into the wall.

“Leonardo would love to see how much science is being used to look for his most celebrated masterpiece,” Dr. Seracini said, gazing up at the wall where he hopes the painting can be found, and then retrieved intact. “I can imagine him being fascinated with all this high-tech gear we’re going to set up.”

Dr. Seracini was standing in the Palazzo Vecchio’s grand ceremonial chamber, the Hall of 500, which was the center of Renaissance politics when Leonardo and Michelangelo were commissioned to adorn it with murals of Florentine military victories. On this July day of 2009, it remained the political hub, as evidenced by the sudden appearance of Florence’s new mayor, Matteo Renzi, who was rushing from his office to a waiting car.

The scientific lecture ceased as Dr. Seracini moved quickly to intercept the mayoral entourage. He was eager to use the second essential strategy for retrieving a Leonardo painting in Florence: find the right patron.

That has always been a good tactic in the home of the Medicis and bureaucrats like Machiavelli, a friend of Leonardo’s who signed the contract commissioning the battle mural. Dr. Seracini, an engineering professor at the University of California, San Diego, had spent years in bureaucratic limbo waiting to try his neutron-beam technique, but he saw this new mayor as his best hope yet for finding Leonardo’s work.

The quest had begun more than three decades earlier with a clue fit for a Dan Brown novel. In 1975, after studying engineering in the United States, Dr. Seracini returned to his native Florence and surveyed the Hall of 500 with a Leonardo scholar, Carlo Pedretti.

They were looking for “The Battle of Anghiari,” the largest painting Leonardo ever undertook (three times the width of “The Last Supper”). Although it was never completed — Leonardo abandoned it in 1506 — he left a central scene of clashing soldiers and horses that was hailed as an unprecedented study of anatomy and motion. For decades, artists like Raphael went to the Hall of 500 to see it and make their own copies.

Then it vanished. During the remodeling of the hall in 1563, the architect and painter Giorgio Vasari covered the walls with frescoes of military victories by the Medicis, who had returned to power. Leonardo’s painting was largely forgotten.

But in 1975, when Dr. Seracini studied one of Vasari’s battle scenes, he noticed a tiny flag with two words, “Cerca Trova”: essentially, seek and ye shall find. Was this Vasari’s signal that something was hidden underneath?

The technology of the 1970s did not provide much of an answer. Dr. Seracini went on to make his name with scientific analyses of other works of art, and to found the Center of Interdisciplinary Science for Art, Architecture and Archaeology at U.C.S.D. In 2000 he returned to the hall with new technology and a new financial patron, Loel Guinness, a British philanthropist. By taking infrared pictures and laser-mapping the room, Dr. Seracini’s team discovered where the doors and windows had been before Vasari’s remodeling.

The reconstructed blueprint, combined with 16th-century documents, was enough to locate the spot painted by Leonardo. It also offered a potential explanation for Michelangelo’s failure to do anything more than an initial sketch for his mural: He must have been miffed that Leonardo was assigned a section of the wall with much better window light.

“This room is huge, but it wasn’t big enough for both Michelangelo and Leonardo,” Dr. Seracini said. (Visit nytimes.com/tierneylab for more details.)

The new analysis showed that the spot painted by Leonardo was right at the “Cerca Trova” clue. The even better news, obtained from radar scanning, was that Vasari had not plastered his work directly on top of Leonardo’s. He had erected new brick walls to hold his murals, and had gone to special trouble to leave a small air gap behind one section of the bricks — the section in back of “Cerca Trova.”

But how could anyone today know what lay behind the fresco and the bricks? How could anyone peer six inches into the wall without harming the historic fresco on the surface?

Dr. Seracini was stymied until 2005, when he appealed for help at a scientific conference and got a suggestion to send beams of neutrons harmlessly through the fresco. With help from physicists in the United States, Italy’s nuclear-energy agency and universities in the Netherlands and Russia, Dr. Seracini developed devices for identifying the telltale chemicals used by Leonardo.

One device can detect the neutrons that bounce back after colliding with hydrogen atoms, which abound in the organic materials (like linseed oil and resin) employed by Leonardo. Instead of using water-based paint for a traditional fresco in wet plaster like Vasari’s, Leonardo covered the wall with a waterproof ground layer and used oil-based paints.

The other device can detect the distinctive gamma rays produced by collisions of neutrons with the atoms of different chemical elements. The goal is to locate the sulfur in Leonardo’s ground layer, the tin in the white prime layer and the chemicals in the color pigments, like the mercury in vermilion and the copper in blue pigments of azurite.
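
The chemistry in the two paragraphs above amounts to a lookup table, which a short Python sketch can make explicit; the table entries come straight from the article, while the function and the sample detection list are hypothetical.

    # Map elements identified from neutron-induced gamma rays to the layers
    # and pigments the article attributes to Leonardo's technique.
    LAYER_CHEMISTRY = {
        "S":  "sulfur -> waterproof ground layer",
        "Sn": "tin -> white prime layer",
        "Hg": "mercury -> vermilion pigment",
        "Cu": "copper -> blue azurite pigment",
    }

    def interpret(detected):
        """Report which painting layer each detected element points to."""
        for element in detected:
            verdict = LAYER_CHEMISTRY.get(element, "not a listed marker")
            print(f"{element}: {verdict}")

    interpret(["S", "Sn", "Hg", "Ca"])   # hypothetical detection result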

Developing this technology was difficult, but not as big a challenge as getting permission to use it. Dr. Seracini kept running into political and bureaucratic dead ends. So when he saw the new mayor dashing across the Hall of 500 that July afternoon, Dr. Seracini rushed at the chance to make a personal appeal to Mr. Renzi, who had been a fan of the project before his election.

With the politesse of a Medici, the mayor paused and listened, then promised to further this artistic endeavor once he had dealt with his first batch of election pledges.

“My dream is to see this discovery very soon,” Mr. Renzi said. “Soon” can be a highly relative term in Italian bureaucracies, but the mayor did indeed go on to restart the approval process and meet with one of the current patrons of the project, the National Geographic Society. Last week, the mayor said he expected it to proceed shortly.

“We are very willing to give Professor Seracini permission,” Mr. Renzi said Thursday. “The only issue that remains concerns timing — who does what. Within a week or two it should get the go-ahead.”

Once he gets permission, Dr. Seracini said, he hopes to complete the analysis within about a year. If “The Battle of Anghiari” is proved to be there, he said, it would be feasible for Florentine authorities to bring in experts to remove the exterior fresco by Vasari, extract the Leonardo painting and then replace the Vasari fresco. Of course, no one knows what kind of shape the painting might be in today. But Dr. Seracini, who has extensively analyzed the damages suffered by many Renaissance paintings, said that he was optimistic about “The Battle of Anghiari.”

“The advantage is that it has been covered up for five centuries,” he said. “It’s been protected against the environment and vandalism and bad restorations. I don’t expect there to be much decay.”

If he is right, then perhaps Vasari did Leonardo a favor by covering up the painting — and taking care to leave that cryptic little flag above the trove.

Jessica Donati contributed reporting from Rome.

Monday, October 5, 2009

Nobel prize for chromosome find



[Image: Chromosomes house genetic material]

This year's Nobel prize for medicine goes to three US-based researchers who discovered how the body protects the chromosomes housing vital genetic code.

Elizabeth Blackburn, Carol Greider and Jack Szostak jointly share the award.

Their work revealed how the chromosomes can be copied and has helped further our understanding of human ageing, cancer and stem cells.

The answer lies at the ends of the chromosomes - the telomeres - and in an enzyme that forms them - telomerase.


The 46 chromosomes contain our genome written in the code of life - DNA.

When a cell is about to divide, the DNA molecules, housed on two strands, are copied.

But scientists had been baffled by an anomaly.

For one of the two DNA strands there is a problem: the very end of the strand cannot be copied.

Protecting the code of life

Therefore, the chromosomes should be shortened every time a cell divides - but in fact that is not usually the case.

If the telomeres did repeatedly shorten, cells would rapidly age.


Conversely, if the telomere length is maintained, the cell would have eternal life, which could also be problematic. This happens in the case of cancer cells.

This year's prize winners solved the conundrum when they discovered how the telomere functions and found the enzyme that copies it.
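
The conundrum and its resolution can be captured in a toy simulation. The Python sketch below is purely illustrative: the human telomeric repeat TTAGGG is real, but the loss per division, starting length and senescence threshold are invented round numbers.

    # Toy model of the end-replication problem and telomerase's counter-effect.
    LOSS_PER_DIVISION = 50    # bases lost each division (illustrative)
    SENESCENCE_LIMIT = 5_000  # below this length the cell stops dividing (illustrative)

    def divisions_until_senescence(length, telomerase_adds=0, cap=10_000):
        """Count divisions before the telomere drops below the limit.
        telomerase_adds: bases of TTAGGG repeat restored per division."""
        divisions = 0
        while length >= SENESCENCE_LIMIT and divisions < cap:
            length -= LOSS_PER_DIVISION    # the uncopyable strand end
            length += telomerase_adds      # telomerase compensates
            divisions += 1
        return divisions

    print(divisions_until_senescence(10_000))                      # ~100: the cell line ages out
    print(divisions_until_senescence(10_000, telomerase_adds=50))  # hits the cap: 'immortal'

With no telomerase the cell line dies out after a finite number of divisions; with full compensation it never stops, the same property that makes telomerase-rich cancer cells problematic.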

Elizabeth Blackburn, of the University of California, San Francisco, and Jack Szostak, of Harvard Medical School, discovered that a unique DNA sequence in the telomeres protects the chromosomes from degradation.

Joined by Johns Hopkins University's Carol Greider, then a graduate student, Blackburn started to investigate how the telomeres themselves were made and the pair went on to discover telomerase - the enzyme that enables DNA polymerases to copy the entire length of the chromosome without missing the very end portion.

Their research has led others to hunt for new ways to cure cancer.

It is hoped that cancer might be treated by eradicating telomerase. Several studies are under way in this area, including clinical trials evaluating vaccines directed against cells with elevated telomerase activity.

Some inherited diseases are now known to be caused by telomerase defects, including certain forms of anaemia in which there are insufficient cell divisions in the stem cells of the bone marrow.

The Nobel Assembly at Sweden's Karolinska Institute, which awarded the prize, said: "The discoveries... have added a new dimension to our understanding of the cell, shed light on disease mechanisms, and stimulated the development of potential new therapies."

Carol Greider, now 48, said she was phoned in the early hours with the news that she had won.

She said: "It's really very thrilling, it's something you can't expect."

Elizabeth Blackburn, now 60, shared her excitement, saying: "Prizes are always a nice thing. It doesn't change the research per se, of course, but it's lovely to have the recognition and share it with Carol Greider and Jack Szostak."

Professor Roger Reddel of the Children's Medical Research Institute in Sydney, Australia, said: "The telomerase story is an outstanding illustration of the value of basic research."

Sir Leszek Borysiewicz, chief executive of the Medical Research Council, said: "The Medical Research Council extends its congratulations to Blackburn, Greider and Szostak on winning the 2009 Nobel Prize.

"Their research on chromosomes helped lay the foundations of future work on cancer, stem cells and even human ageing, research areas that continue to be of huge importance to the scientists MRC funds and to the many people who will ultimately benefit from the discoveries they make."





Three US scientists win the Nobel Prize in Medicine

[Photos: 2009 Nobel laureate in medicine Jack Szostak (Reuters); laureates Elizabeth Blackburn, right, and Carol Greider, left (EPA file photos)]

Two women share the award, a first for the medicine prize

[Compiled by 魏國金 from wire reports, Stockholm, October 5] Three American scientists, two of them women, were awarded the Nobel Prize in Medicine on the 5th for discovering telomeres and telomerase, work that has opened new lines of research into cancer and the ageing process. The Nobel committee noted that this is the first time in the history of the medicine prize that two women have shared the award in the same year.

The Nobel jury said the American scientists Elizabeth Blackburn, Carol Greider and Jack Szostak were honoured for discovering how chromosomes are protected by telomeres, and for establishing what role the enzyme telomerase plays in maintaining or stripping away this vital protective cap. "The prize recognises the discovery of a fundamental cellular mechanism, one that has already stimulated the development of new medical strategies," the jury said.

Blackburn, 60, Australian-born, is a professor of biology and physiology at the University of California, San Francisco. Woken at 2am to be told of the prize, she said: "Winning doesn't change the research itself, but it is a delight to have the recognition and to share it with Greider and Szostak."

Greider, 48, got the call shortly before 5am while doing the laundry. Now on the faculty of Johns Hopkins University, she said the research grew out of experiments to understand how cells work, not from any idea for a medical application. "Funding this kind of curiosity-driven science is really very important," she said.

London-born Szostak has been at Harvard Medical School since 1979. He said the honour was all the sweeter for being shared with Blackburn and Greider, and that he was looking forward to a big celebration party.

Telomerase discovery may help fight cancer and ageing

The three will share the prize of 10 million Swedish kronor (about NT$45.8 million). The same research earned them the 2006 Lasker Award, often called "America's Nobel".

Chromosomes are the rod-like structures that carry DNA, the genetic material; telomeres, which sit like protective caps on the ends of chromosomes, are a key factor in ageing. Blackburn and Szostak discovered in 1982 that a unique DNA sequence in the telomeres protects chromosomes from degradation when cells divide.

With Greider's help, Blackburn also identified telomerase, the enzyme that builds telomeric DNA. When telomeres wear down, cells grow old; if telomerase is abundant, telomere length is maintained and cellular ageing is held in check. At the same time, an excess of telomerase lets cells replicate without limit, which can lead to cancer. Telomerase has been found to be highly active in many cancer cells, and the search for ways to block that mechanism with telomerase inhibitors has become one of the most intensively pursued areas of cancer research.