Saturday, May 31, 2008

Microsoft demonstrates new Windows touch-screen features

Microsoft's new operating system to use the touch screen in place of the mouse

Microsoft said on Tuesday (the 27th) that its next-generation operating system will be built around touch-screen input, as a replacement for the traditional computer mouse.

The company's chief executive also confirmed that Microsoft remains interested in forming a partnership with Yahoo.

Microsoft hopes the new Windows operating system will be better received in the market than Vista has been.

The operating system, planned to go on sale in 2009, was demonstrated mainly through the touch screen: enlarging and shrinking pictures, finding routes on maps, drawing, and playing a virtual piano.

Microsoft chairman Bill Gates said: "The way users interact with the new operating system will change a great deal."

Speaking at the "All Things Digital" conference held in San Diego, Gates said Windows 7 will adopt the newest forms of communication and interaction.

Gates described the new Windows features as a natural evolution away from the mouse. "Today almost all user interfaces rely on the keyboard and mouse," he said. "In a few years, speech, vision, digital ink and other modes will all become part of the interface."

Microsoft chief executive Steve Ballmer said the Windows 7 demonstration shown at the conference was only a small glimpse of the system.

While admitting some missteps, the two Microsoft executives defended Vista, the operating system already on the market.

Gates said he has never been 100 percent satisfied with any Microsoft product. "Compared with some of our other products, Vista has given us more opportunities to bring the company's culture into play," he stressed.

Ballmer said Microsoft is still discussing a partnership with Yahoo, after Yahoo rejected Microsoft's US$47.5 billion takeover offer earlier this month.

"Microsoft has not made a new acquisition offer to Yahoo," he said. "We reserve the right to make one, but it is not under consideration at the moment."

Billboards That Look Back


Hiroko Masuike for The New York Times

Rick Rivera, left, and Nathan Lichon are drawn to an electronic billboard in Manhattan.


Published: May 31, 2008

In advertising these days, the brass ring goes to those who can measure everything — how many people see a particular advertisement, when they see it, who they are. All of that is easy on the Internet, and getting easier in television and print.

Hiroko Masuike for The New York Times

The ad is equipped with a camera that gathers details on passers-by.

Billboards are a different story. For the most part, they are still a relic of old-world media, and the best guesses about viewership numbers come from foot traffic counts or highway reports, neither of which guarantees that the people passing by were really looking at the billboard, or that they were the ones sought out.

Now, some entrepreneurs have introduced technology to solve that problem. They are equipping billboards with tiny cameras that gather details about passers-by — their gender, approximate age and how long they looked at the billboard. These details are transmitted to a central database.

Behind the technology are small start-ups that say they are not storing actual images of the passers-by, so privacy should not be a concern. The cameras, they say, use software to determine that a person is standing in front of a billboard, then analyze facial features (like cheekbone height and the distance between the nose and the chin) to judge the person’s gender and age. So far the companies are not using race as a parameter, but they say that they can and will soon.
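
The article does not describe the vendors' software in any detail, so the sketch below is only a rough illustration of the kind of loop such a billboard camera might run: detect a face with an off-the-shelf detector, time how long it stays in view, and hand the crop to a demographic model before uploading a few anonymised fields. It uses OpenCV's stock Haar-cascade face detector; estimate_gender_and_age() and upload_record() are hypothetical placeholders, not Quividi's or TruMedia's code.

```python
# Rough sketch of an audience-measurement loop of the kind described above.
# Assumes OpenCV (cv2) is installed; estimate_gender_and_age() and
# upload_record() are hypothetical stand-ins for a vendor's model and server.
import time
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_gender_and_age(face_img):
    # Placeholder for a trained demographic classifier (hypothetical).
    return {"gender": "unknown", "age_band": "unknown"}

def upload_record(record):
    # Placeholder for sending anonymised fields to a central database.
    print(record)

def watch(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    dwell_start = None      # when the current viewer first appeared
    last_face = None        # most recent face crop, kept only in memory
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            last_face = gray[y:y + h, x:x + w]
            if dwell_start is None:
                dwell_start = time.time()       # a viewer has arrived
        elif dwell_start is not None:
            # Viewer left: report dwell time plus estimated demographics,
            # discarding the image itself.
            record = {"dwell_seconds": round(time.time() - dwell_start, 1)}
            record.update(estimate_gender_and_age(last_face))
            upload_record(record)
            dwell_start, last_face = None, None
    cap.release()

if __name__ == "__main__":
    watch()
```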

The goal, these companies say, is to tailor a digital display to the person standing in front of it — to show one advertisement to a middle-aged white woman, for example, and a different one to a teenage Asian boy.

“Everything we do is completely anonymous,” said Paolo Prandoni, the founder and chief scientific officer of Quividi, a two-year-old company based in Paris that is gearing up billboards in the United States and abroad. Quividi and its competitors use small digital billboards, which tend to play short videos as advertisements, to reach certain audiences.

Over Memorial Day weekend, a Quividi camera was installed on a billboard on Eighth Avenue near Columbus Circle in Manhattan that was playing a trailer for “The Andromeda Strain,” a mini-series on the cable channel A&E.

“I didn’t see that at all, to be honest,” said Sam Cocks, a 26-year-old lawyer, when the camera was pointed out to him by a reporter. “That’s disturbing. I would say it’s arguably an invasion of one’s privacy.”

Organized privacy groups agree, though so far the practice of monitoring billboards is too new and minimal to have drawn much opposition. But the placement of surreptitious cameras in public places has been a flashpoint in London, where cameras are used to look for terrorists, as well as in Lower Manhattan, where there is a similar initiative.

Although surveillance cameras have become commonplace in banks, stores and office buildings, their presence takes on a different meaning when they are meant to sell products rather than fight crime. So while the billboard technology may solve a problem for advertisers, it may also stumble over issues of public acceptance.

“I guess one would expect that if you go into a closed store, it’s very likely you’d be under surveillance, but out here on the street?” Mr. Cocks asked. At the least, he said, there should be a sign alerting people to the camera and its purpose.

Quividi’s technology has been used in Ikea stores in Europe and McDonald’s restaurants in Singapore, but it has just come to the United States. Another Quividi billboard is in a Philadelphia commuter station with an advertisement for the Philadelphia Soul, an indoor football team. Both Quividi-equipped boards were installed by Motomedia, a London-based company that converts retail and street space into advertisements.

“I think a big part of why it’s accepted is that people don’t know about it,” said Lee Tien, senior staff attorney for the Electronic Frontier Foundation, a civil liberties group.

“You could make them conspicuous,” he said of video cameras. “But nobody really wants to do that because the more people know about it, the more it may freak them out or they may attempt to avoid it.”

And the issue gets thornier: the companies that make these systems, like Quividi and TruMedia Technologies, say that with a slight technological addition, they could easily store pictures of people who look at their cameras.

The companies say they do not plan to do this, but Mr. Tien said he thought their intentions were beside the point. The companies are not currently storing video images, but they could if compelled by something like a court order, he said.

For now, “there’s nothing you could go back to and look at,” said George E. Murphy, the chief executive of TruMedia who was previously a marketing executive at DaimlerChrysler. “All it needs to do is look at the audience, process what it sees and convert that to digital fields that we upload to our servers.”

TruMedia’s technology is an offshoot of surveillance work for the Israeli government. The company, whose slogan is “Every Face Counts,” is testing the cameras in about 30 locations nationwide. One TruMedia client is Adspace Networks, which runs a network of digital screens in shopping malls and is testing the system at malls in Chesterfield, Mo., Winston-Salem, N.C., and Monroeville, Pa. Adspace’s screens show a mix of content, like the top retail deals at the mall that day, and advertisements for DVDs, movies or consumer products.

Within advertising circles, these camera systems are seen as a welcome answer to the longstanding problem of how to measure the effectiveness of billboards, and how to figure out what audience is seeing them. On television, Nielsen ratings help marketers determine where and when commercials should run, for example. As for signs on highways, marketers tend to use traffic figures from the Transportation Department; for pedestrian billboards, they might hire someone to stand nearby and count people as they walk by.

The Internet, though, where publishers and media agencies can track people’s clicks for advertising purposes, has raised the bar on measurement. Now, it is prodding billboards into the 21st century.

“Digital has really changed the landscape in the sort of accuracy we can get in terms of who’s looking at our creative,” Guy Slattery, senior vice president for marketing for A&E, said of Internet advertising. With Quividi, Mr. Slattery said, he hoped to get similar information from what advertisers refer to as the out-of-home market.

“We’re always interested in getting accurate data on the audience we’re reaching,” he said, “and for out-of-home, this promises to give a level of accuracy we’re not used to seeing in this medium.”

Industry groups are scrambling to provide their own improved ways of measuring out-of-home advertising. An outdoor advertising association, the Traffic Audit Bureau, and a digital billboard and sign association, the Out-of-Home Video Advertising Association, are both devising more specific measurement standards that they plan to release by the fall.

Even without cameras, digital billboards encounter criticism. In cities like Indianapolis and Pittsburgh, outdoor advertising companies face opposition from groups that call their signs unsightly, distracting to drivers and a waste of energy.

There is a dispute over whether digital billboards play a role in highway accidents, and a national study on the subject is expected to be completed this fall by a unit of the Transportation Research Board. The board is part of a private nonprofit institution, the National Research Council.

Meanwhile, privacy concerns about cameras are growing. In Britain, which has an estimated 4.2 million closed-circuit television cameras — one for every 14 people — the matter has become a hot political issue, with some legislators proposing tight restrictions on the use and distribution of the footage.

Reactions to the A&E billboard in Manhattan were mixed. “I don’t want to be in the marketing,” said Antwann Thomas, 17, a high school junior, after being told about the camera. “I guess it’s kind of creepy. I wouldn’t feel safe looking at it.”

But other passers-by shrugged. “Someone down the street can watch you looking at it — why not a camera?” asked Nathan Lichon, 25, a Navy officer.

Walter Peters, 39, a truck driver for a dairy, said: “You could be recorded on the street, you could be recorded in a drugstore, whatever. It doesn’t matter to me. There’s cameras everywhere.”

Thursday, May 29, 2008

奇跡のミッキーマウス (A miraculous Mickey Mouse)

[Nikkei BP report] While interviewing Matsushita Electric Industrial recently, I learned about some episodes from the development of UniPhier, the company's unified platform for digital products. The SoC "UniPhier PH1-ProII", developed in 2007, incorporates an H.264 encoder that compresses high-definition video to one quarter of its original data volume while preserving HDTV resolution. It is this H.264 encoder chip that made "4x-speed recording" possible on Blu-ray Disc/DVD recorders.

During development, the circuit scale of the H.264 encoder was so large that the engineers initially believed a single-chip implementation was impossible; after further study they concluded it could be done with a 45nm process. To meet the launch date of the Blu-ray Disc/DVD recorders, however, the 45nm plan had to be moved up by half a year. With the timetable so tight, the engineers worked long hours of overtime to keep pace.

But in the final stage of chip layout design the team hit a routing problem: the circuits could not be wired up. The PH1-ProII contains 250 million transistors (see our earlier report), and the circuit was so large that even experienced layout designers were at a loss. At that desperate moment a "savior" appeared. This engineer connected the circuits in an ingenious way and produced a brilliant layout. Because the layout resembled the outline of Mickey Mouse, some people called it the "miraculous Mickey Mouse."

In fact, this engineer was not a layout specialist but a system architect. Precisely because he understood how each circuit in the system worked, he was able to complete the wiring in a way the layout designers would never have thought of. People often say how important cooperation between the semiconductor division and the product divisions is for SoC design; this episode is a textbook example.

Nor was this an isolated case. Throughout the development of UniPhier there were many instances of engineers from different departments helping one another solve hard problems; in some cases, experts found in other departments cracked thorny, critical issues. While marveling at the depth of the company's talent pool, I was also struck by the importance of cross-departmental cooperation.

The stories behind the development of UniPhier are scheduled to run as a series in the "documentary" (實錄) column of Nikkei Electronics, starting with the June 16, 2008 issue. Stay tuned. (Reporter: 木村雅秀, Masahide Kimura)

■ Japanese original: 奇跡のミッキーマウス

基礎研究のムダから逃げてはいけない (Don't run from the wasted effort of basic research)

In recent years Japan has been re-examining its manufacturing industry. For a time, manufacturers were eager to move operations overseas on a large scale, but for a resource-poor country like Japan, if all manufacturing were moved abroad there would be nothing left.

If I say we should hold our ground in Japan, someone will no doubt object that "this is the age of globalization." But from the standpoint of a company continuously generating added value, Japan still has the advantage in the speed and agility of research and development, and that is the source of competitiveness. As a result, many companies that rushed overseas are now returning to Japan. That is the outcome of rethinking manufacturing.

I believe the Japanese traits of curiosity, focus, diligence, passion and attention to detail are well suited to manufacturing. Moreover, Japanese people place great weight on teamwork; that national character is an important factor. It may be ordinary and unglamorous, but we still intend to stay in manufacturing. Making high-function, high-quality products that satisfy consumers, and finding meaning and enjoyment in life through that work, is surely a good thing.

Customer demands change with the times, however. In the past, the washing machine saved office workers and housewives an enormous amount of time. Young people may not know that clothes were once scrubbed on washboards; seen in that light, probably nothing has brought as much convenience as the washing machine. But today, if you ask people what they want in their lives, hardly anyone will say a washing machine.

To meet people's needs in each era, you have to put in the work ahead of time. The trouble is that manufacturing takes time. Development is usually measured in units of ten years, and if things do not go well it can stretch to 25. So if the research laboratories had not set a direction 20 years ago, there would be no products to bring out today. Mobile phones and flat-panel televisions are now spreading around the world, and the direction of their research and development likewise had to be fixed 20 years ago. Simply chasing what is in front of you will not work.

—— Can you really see 20 years ahead when you start research and development?

Of course the success rate of research and development started 20 years in advance is low. Some things only become clear through trial and error, and forecasts can be wrong. So when you undertake basic research, you must be mentally prepared to fail.

We also need the insight to judge whether a failure was meaningful or not. Having a range of specialties is very effective in cultivating that insight; a company with many specialties has a broader field of view.

It is normal for basic research not to produce excellent results very often. A certain amount of failure, in other words the "wasted effort" of R&D, is therefore unavoidable and even necessary. Waiting to act until you can see the result does not work. Private companies, too, accept that meaningful failure is necessary, and they push such research forward under the direct leadership of top management.

I have experienced many failures myself, but every one of them later became experience. In my own observation, there is no one who has never failed. It is true that failure is, in a sense, wasted effort, and being scolded with "why didn't you notice beforehand?" is only fair. But nobody is perfect; what matters more is to cultivate insight from failure.

So what is the state of basic research in Japan today? Research topics that are certain to yield good results are immediately taken up all over the world, so there is little point in pursuing them. Research has to aim at the unknown. Without a few meaningful failures, basic research will never put out new shoots. To say the success rate is "three in a thousand" may be an exaggeration, but it certainly is not high.

You can build products on applied work alone, but when problems arise it is hard to say how far you will be able to solve them. Without solid fundamentals you cannot cope; without a long-term commitment to the fundamentals you cannot become strong. That is my view.

Career profile

莊山悅彥 (Etsuhiko Shoyama): joined Hitachi, Ltd. in April 1959. He became general manager of the Kokubu Works in June 1985, general manager of the Tochigi Works in February 1987, general manager of the Consumer Products Business Division in August 1990, director and general manager of the AV Products Business Division in June 1991, managing director and general manager of the Home Appliance Business Group in June 1993, senior managing director and general manager of the Home Appliance and Information Media Business Group in June 1995, director and executive vice president in June 1997, director and president in April 1999, and director and chief executive officer in April 2006; he has been chairman of the board since April 2007.

■ Japanese original: 基礎研究のムダから逃げてはいけない

A small bite for a monkey... a giant leap for mankind

The Independent's front page carried the story of a monkey eating a banana, under the large headline "A small bite for a monkey... a giant leap for mankind." A monkey eating a banana is hardly news in itself, but this monkey had been trained to feed itself using a robotic arm controlled by thought alone. The breakthrough is expected to help paralysed patients. The Independent quoted scientists as saying that they connected the monkey's brain to a computer through a set of electrodes, allowing the animal to move the robotic arm just by thinking. According to the report, thought control of robotic arms and other automated devices could give paralysed patients a way to act by mind alone. Scientists plan eventually to apply the technology to patients with spinal cord injuries or motor neurone disease who cannot move their bodies at all. The Independent says scientists hope one day to develop machines that feel like a natural extension of the human body, letting technology assist with activities such as driving a car or even operating a crane.




A small bite for a monkey... a giant leap for mankind

Hope for paralysis victims as animal is trained to control robotic arm using only its thoughts

By Steve Connor, Science Editor
Thursday, 29 May 2008

Two monkeys have been trained to eat morsels of food using a robotic arm controlled by thoughts that are relayed through a set of electrodes connecting the animal's brain to a computer, scientists have announced.

The astonishing feat is being seen as a major breakthrough in the development of robotic prosthetic limbs and other automated devices that can be manipulated by paralysed patients using mind control alone.

Scientists eventually plan to use the technology in the development of prosthetics for people with spinal cord injuries or conditions such as motor neurone disease, where total paralysis leaves few other options for controlling artificial limbs or wheelchairs. They hope one day to develop robotic machines that feel like a natural extension of the human body, which would enable the technology to be adapted for a wide variety of purposes, from driving a car to operating a fork-lift truck.

Andrew Schwartz, professor of neurobiology at the University of Pittsburgh, said that the monkeys were able to move the robot arm to bring pieces of marshmallow or fruit to their mouths in a set of "fluid and well-controlled" movements.

"Now we are beginning to understand how the brain works using brain-machine interface technology," said Dr Schwartz, whose study is published online in the journal Nature.

Video courtesy of Andrew Schwartz / University of Pittsburgh

"The more we understand about the brain, the better we'll be able to treat a wide range of brain disorders, everything from Parkinson's disease and paralysis to, eventually, Alzheimer's disease and perhaps even mental illness," he said.

The study is part of a larger effort to find ways of tapping into the brain's complicated electrical activity that controls the movement of muscles. Eventually, scientists hope to develop a way of detecting brain patterns that signify a person's intentions regarding the movement of a limb.

The technology is known as the "brain-machine interface" which hopes to connect the silicon hardware of the microprocessor with the carbon-based "software" of the human nervous system so that machines can be controlled by the mind.

"Our immediate goal is to make a prosthetic device for people with total paralysis. Ultimately, our goal is to better understand brain complexity," Dr Schwartz said.

The monkeys in the experiment had been initially trained to control the robot arm with a joystick operated by the animals' own hands. Later on, the monkeys' arms were gently restrained and they were trained to use electrical patterns in the motor centre of their brain – which controls muscle movements – to operate the robotic arm.

The scientists said that they were astonished to find how easy it was for them to train the monkeys to move the robotic arm, which appeared to be readily accepted by the animals as a useful eating tool.

The scientists used electrodes that monitored a representative sample of about 100 brain cells out of the many millions that are activated when the motor centre is involved in muscular movement. The electrical patterns were sent to a computer which had been programmed to analyse the patterns and use them to control the movement of the robotic arm, which consisted of a shoulder joint, an elbow joint and a claw-like gripper "hand".
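
The article does not spell out how the firing patterns of those 100 cells are turned into arm commands. One classic decoding idea from this line of research is the population-vector method: each motor-cortex neuron fires hardest for its own "preferred direction," and a firing-rate-weighted sum of preferred directions points roughly where the arm should go. The toy simulation below sketches that idea with synthetic cosine-tuned neurons; it is an illustration of the concept, not the Pittsburgh group's actual pipeline.

```python
# Toy population-vector decoder: synthetic cosine-tuned neurons "vote" for
# their preferred directions in proportion to how strongly they fire.
# Illustrates the classic decoding idea only; not the study's actual code.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # number of recorded cells
preferred = rng.uniform(0, 2 * np.pi, N)   # each cell's preferred direction

def firing_rates(true_direction, baseline=10.0, gain=8.0):
    # Cosine tuning: a cell fires hardest when the intended movement
    # matches its preferred direction, with Poisson-like noise on top.
    mean = baseline + gain * np.cos(true_direction - preferred)
    return rng.poisson(mean)

def decode(rates):
    # Population vector: sum unit vectors along the preferred directions,
    # weighted by how far each cell's rate sits above the population mean.
    weights = rates - rates.mean()
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x)

true_dir = np.deg2rad(35.0)                # intended reach direction
estimate = decode(firing_rates(true_dir))
print(f"intended {np.degrees(true_dir):.1f} deg, "
      f"decoded {np.degrees(estimate) % 360:.1f} deg")
```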

"The monkey learns by first observing the movement, which activates the brain cells as if he were doing it. It's a lot like sports training, where trainers have athletes first imagine that they are performing the movements they desire," Dr Schwartz said.

The robotic arm used in the experiment had five degrees of freedom – three at the shoulder, one at the elbow and one at the hand, which was supposed to emulate the movement of the human arm. The training of the monkeys took several days using food as rewards.

Previous work by the group has concentrated on training monkeys to move cursors on a computer screen, but the latest study using a robotic arm involved a more complicated system of movements, the scientists said.

John Kalaska, of the University of Montreal, said that the experiment is the first demonstrated use of "brain-machine interface technology" to perform a practical behavioural act such as feeding. "It represents the current state of the art in the development of neuroprosthetic controllers for complex arm-like robots that could one day, in principle, help patients perform many everyday tasks such as eating, drinking from a glass or using a tool," Dr Kalaska said.

"One encouraging finding was how readily the monkeys learnt to control the robot... Equally encouraging was how naturally the monkeys controlled and interacted with the robot," he said.

"They made curved trajectories of the gripper through space to avoid obstacles, made rapid corrections in the trajectory when the experimenter unexpectedly changed the location of the food morsel, and even used the gripper as a prop to push a loose treat from their lips into their mouth," Dr Kalaska said.

In 2000, scientists at the Massachusetts Institute of Technology were the first to show that it is possible to record the neural activity in a monkey's brain and send it over the internet to control the movement of a remotely controlled robotic arm in a laboratory 600 miles away. The team also used microelectrodes implanted into the monkey's brain but it did not involve using the robot arm for a useful task such as feeding.

Dr Kalaska said the next task was to develop a way of sending sensory information back to the monkey through the robotic arm so that the animal knows how hard to grip an object, which is essential for human interactions.

"For physical interactions with the environment, the subject must also be able to sense and control the forces exerted by the robot on any object or surface so that, for instance, they can pick up an object with a strong enough grip to prevent it slipping from the robotic hand but not so strong as to crush it," he said. "These and other technical issues are challenging, but not insurmountable."

Just a Twist (and Tilt) of the Wrist--accelerometer.

Basics

Just a Twist (and Tilt) of the Wrist

Controlling the velocity and direction of a radio-controlled pickup truck by maneuvering a cellphone equipped with an accelerometer.


Published: May 29, 2008

WHEN Scott Forstall, a senior vice president at Apple, demonstrated a new space game for the iPhone, he let the spacecraft cruise through a field of stars for a bit. As the audience finished absorbing the look of the shooter set in a distant galaxy, he asked them, “I don’t have a joystick on here, or any four-button toggle, so how do I steer?”

A few seconds later, he tilted the phone just a few degrees and the ship shifted course on the screen. “We have a full three-axis accelerometer in here, so all I’ve got to do is move the phone around and now I’m steering it,” he said. The crowd of programmers cheered.

The iPhone is not unique. Nintendo’s popular Wii game console uses similar technology, and many cellphones, computers and other electronic gadgets are gaining a sensitivity to motion.

Manufacturers are increasingly embedding accelerometers and other sensors into the machines, which allow them to respond to movement without waiting for their humans to push a button. Game designers and other programmers are jumping to remake user interfaces so that users can direct gadgets with a nudge, a tilt or a shake.

Some of the applications are silly. The programmer Graham Oldfield turned his Nokia N95 cellphone into a virtual light saber by writing software that tracked the phone’s movement using the built-in accelerometer.

When the phone is still, it emits a low hum, but when the user waves it, the pitch and volume increase just like the weapons in the “Star Wars” movies. If the phone is abruptly stopped, it assumes it encountered something and provides a proper cracking sound. Version 1.5 of the software, available free from Mr. Oldfield’s Web site (graho.wordpress.com), adds tactile feedback through the vibrating ringer, a feature Mr. Oldfield calls SaberTingle.

Versions of the popular video game Snake from the 1970s are now available for iPhone and Nokia phones with accelerometers. Tilting the phone guides a snake to dinner, a process that gets harder and harder as the snake grows.

Andreas Jakl and Stephan Selinger, professors at the University of Applied Sciences in Hagenberg, Austria, transformed a Nokia N95 cellphone into a steering wheel for a radio-controlled car (www.symbianresources.com/projects). Turn the phone to the left and the car turns left.

Professor Jakl also worked with a student to develop software that turned a Nokia phone into a device that records the movement of a skier or a snowboarder. “We’re from Austria and we’re a skiing nation,” he said. “You put your phone into your pocket. It will record your jumps, how often you jump, how often you crash. You can compare it with your friends to see who jumped longer.”

Not all of the applications are about having fun. FlipSilent and ShakeSMS are two freeware tools that let the accelerometer control the user interface. If the phone rings, FlipSilent will shut off the ringer when the phone is turned over. ShakeSMS will page through text messages with a shake and a tilt with no pushing of buttons. Both are available from flipsilent.com.

Accelerometers started appearing in cellphones in the early 1990s. Michael Markowitz, a spokesman for STMicroelectronics, says the company’s line of accelerometers helps to activate air bags in cars and protect hard drives in a laptop should it be suddenly dropped.

The current generation of accelerometers consists of tiny blocks of silicon carved out of wafers using many of the same techniques used to create transistors and circuitry. Circuits built on the same chips sense how movement pushes the blocks along. For instance, the LIS302DL from STMicroelectronics, used in some phones, is only 3 millimeters by 5 millimeters by 1 millimeter, and can measure forces up to 8 times the earth’s gravity along three axes.

Translating the data generated by the chips into results that a user can see requires some artful software. Johnny Chung Lee, a researcher at Carnegie Mellon University, recently built a pen that gives users the illusion they are pushing hard or soft 3-D buttons on a flat computer screen, enhancing their feel of virtual objects. The software follows the user’s movement of the pen and a computer calculates how much pressure should be applied.

One problem with accelerometers is that they detect acceleration — the rate of change in velocity — but not the movement itself, Dr. Lee said. If a computer wants to know the speed or the position, it must apply common equations from high school calculus. Any noise or error is compounded over time.
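
That compounding error is easy to see in a few lines of code. The sketch below integrates readings from a perfectly stationary accelerometer whose output carries a small bias and random noise (made-up figures, not data from any real part); after two integrations the estimated position has wandered several meters from where it should be.

```python
# Why raw accelerometer data drifts: double integration compounds noise.
# The noise level, bias and sample rate are illustrative, not real specs.
import numpy as np

rng = np.random.default_rng(1)
rate_hz, seconds = 100, 30
dt = 1.0 / rate_hz
n = rate_hz * seconds

# The device is sitting still: true acceleration is zero, but every sample
# carries a small bias plus random noise (in m/s^2).
accel = 0.02 + rng.normal(0.0, 0.05, n)

velocity = np.cumsum(accel) * dt       # first integration
position = np.cumsum(velocity) * dt    # second integration

print(f"after {seconds} s the 'stationary' device appears to have moved "
      f"{position[-1]:.2f} m")
```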

Dr. Lee says that error is why many designers pair accelerometers with other sensors, like magnetometers that follow the earth’s magnetic field or G.P.S. radio systems that use a network of satellites to detect geographic position. For example, the controllers for the Sony PlayStation 3 come with sensors that detect angles, tilting, thrusting and pulling. Fusing the results of several sensors gives the device a better sense of where it is going and where it has been.
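
A common form of that fusion is a complementary filter: trust the fast but drift-prone inertial estimate from moment to moment, and periodically nudge it toward an occasional absolute fix such as a G.P.S. reading. The one-dimensional sketch below does exactly that; the rates, noise levels and blend factor are arbitrary illustrative choices rather than values from any shipping device.

```python
# Minimal 1-D complementary filter: a fast but drifting inertial estimate is
# periodically nudged toward a slow, absolute reference such as G.P.S.
# Rates, noise levels and the blend factor are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
rate_hz, seconds = 100, 30
dt = 1.0 / rate_hz
alpha = 0.9                      # how much to trust the inertial path

inertial_vel_bias = 0.05         # m/s of drift in the dead-reckoned estimate
true_pos = 0.0                   # the device is actually stationary
fused = drift_only = 0.0

for i in range(rate_hz * seconds):
    step = inertial_vel_bias * dt        # what dead reckoning thinks happened
    drift_only += step
    fused += step
    if i % rate_hz == 0:                 # one absolute fix per second
        gps = true_pos + rng.normal(0.0, 0.5)    # noisy but unbiased
        fused = alpha * fused + (1 - alpha) * gps

print(f"dead reckoning alone is off by {drift_only:.2f} m; "
      f"the fused estimate is off by {abs(fused):.2f} m")
```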

Game designers are just beginning to tap into the power of accelerometers to transform cellphones into more intuitive game devices. Travis Boatman, a vice president at EA Mobile, a division of Electronic Arts, says that placing the accelerometer in the same place as the screen helps simplify the interface while eliminating the need for the user’s brain to make a connection between a push of a button and an action on a screen across the room.

In E.A.’s forthcoming game Spore, the company modified the control system in the iPhone version to eliminate buttons used in the PC version. “You control where the spore goes, but everything else is controlled for you. It eats automatically,” said Mr. Boatman.

Mr. Boatman says one challenge is that accelerometers are more sensitive than humans, who often cannot tell whether the device is tilting 5, 10 or 12 degrees. The solution lies in blurring measurements and programming the devices to react to general movement.
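
Two standard ingredients of that "blurring" are a dead zone, so that a hand held nearly level registers as no input at all, and a low-pass filter, so the control follows the general motion rather than every jitter. The sketch below applies both to a synthetic tilt signal; the threshold and smoothing values are arbitrary, chosen only to illustrate the idea.

```python
# Turning a jittery tilt reading into a stable steering input: a dead zone
# ignores tiny angles and an exponential low-pass filter smooths tremor.
# The threshold, range and smoothing factor are illustrative values.
import numpy as np

rng = np.random.default_rng(3)

DEAD_ZONE_DEG = 3.0     # tilts smaller than this count as "level"
FULL_LOCK_DEG = 30.0    # tilt at which steering saturates
SMOOTHING = 0.2         # 0..1, lower = smoother but laggier

def steering_input(tilt_deg, state):
    # Map a raw tilt angle (degrees) to a steering value in [-1, 1].
    if abs(tilt_deg) < DEAD_ZONE_DEG:
        target = 0.0
    else:
        span = FULL_LOCK_DEG - DEAD_ZONE_DEG
        target = np.sign(tilt_deg) * min((abs(tilt_deg) - DEAD_ZONE_DEG) / span, 1.0)
    # Exponential moving average toward the target.
    return state + SMOOTHING * (target - state)

# A player holding the phone tilted about 10 degrees with a shaky hand.
state = 0.0
for tilt in 10.0 + rng.normal(0.0, 2.0, 50):
    state = steering_input(tilt, state)
print(f"noisy tilt around 10 deg -> steady steering value {state:.2f}")
```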

Moving a device around can be more intuitive than pressing buttons, said Ethan Einhorn, a developer for Sega of America who has been experimenting with bringing a game called Super Monkey Ball to the iPhone.

“I’ve been able to hand the game off to people who are not gamers and they’re instantly able to understand what they’re supposed to do. There’s a direct connection: I tilt it and the monkey moves,” Mr. Einhorn said.

These opportunities are just beginning to inspire others. Microsoft’s research division recently demonstrated a small PC with a screen that will detect and respond to twisting, stretching, bending and squeezing to move a mouse, switch programs or perform whatever other functions that programmers can dream up.

The Future of the Internet — And How to Stop It


Jonathan Zittrain, The Future of the Internet — And How to Stop It

– The book is available to download under a Creative Commons Attribution Non-Commercial Share-Alike 3.0 license: Download PDF.

– The book can be viewed in an experimental html format courtesy of Yale University Press and the futureofthebook.org people. (The format is experimental; html itself can probably safely be thought of as in full production at this point.) Each paragraph can be annotated: Visit html site.

– Tony Curzon Price at OpenDemocracy is leading a group annotation of the book at Diigo.

– Amazon has enabled search-inside-the-book: Visit Amazon version.

– Still working on Google Books version.




Synopsis:

This extraordinary book explains the engine that has catapulted the Internet from backwater to ubiquity—and reveals that it is sputtering precisely because of its runaway success. With the unwitting help of its users, the generative Internet is on a path to a lockdown, ending its cycle of innovation—and facilitating unsettling new kinds of control.

IPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can’t be easily modified by anyone except their vendors or selected partners. These “tethered appliances” have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted—but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet—its “generativity,” or innovative character—is at risk.

The Internet’s current trajectory is one of lost opportunity. Its salvation, Zittrain argues, lies in the hands of its millions of users. Drawing on generative technologies like Wikipedia that have so far survived their own successes, this book shows how to develop new technologies and social structures that allow users to work creatively and collaboratively, participate in solutions, and become true “netizens.”

Book Excerpt:

Introduction
On January 9, 2007, Steve Jobs introduced the iPhone to an eager audience crammed into San Francisco’s Moscone Center.1 A beautiful and brilliantly engineered device, the iPhone blended three products into one: an iPod, with the highest-quality screen Apple had ever produced; a phone, with cleverly integrated functionality, such as voicemail that came wrapped as separately accessible messages; and a device to access the Internet, with a smart and elegant browser, and with built-in map, weather, stock, and e-mail capabilities. It was a technical and design triumph for Jobs, bringing the company into a market with an extraordinary potential for growth, and pushing the industry to a new level of competition in ways to connect us to each other and to the Web.

This was not the first time Steve Jobs had launched a revolution. Thirty years earlier, at the First West Coast Computer Faire in nearly the same spot, the twenty-one-year-old Jobs, wearing his first suit, exhibited the Apple II personal computer to great buzz amidst “10,000 walking, talking computer freaks.”2 The Apple II was a machine for hobbyists who did not want to fuss with soldering irons: all the ingredients for a functioning PC were provided in a convenient molded plastic case.

It looked clunky, yet it could be at home on someone’s desk. Instead of puzzling over bits of hardware or typing up punch cards to feed into someone else’s mainframe, Apple owners faced only the hurdle of a cryptic blinking cursor in the upper left corner of the screen: the PC awaited instructions. But the hurdle was not high. Some owners were inspired to program the machines themselves, but true beginners simply could load up software written and then shared or sold by their more skilled or inspired counterparts. The Apple II was a blank slate, a bold departure from previous technology that had been developed and marketed to perform specific tasks from the first day of its sale to the last day of its use.

The Apple II quickly became popular. And when programmer and entrepreneur Dan Bricklin introduced the first killer application for the Apple II in 1979—VisiCalc, the world’s first spreadsheet program—sales of the ungainly but very cool machine took off dramatically.3 An Apple running VisiCalc helped to convince a skeptical world that there was a place for the PC at everyone’s desk and hence a market to build many, and to build them very fast.

Though these two inventions—iPhone and Apple II—were launched by the same man, the revolutions that they inaugurated are radically different. For the technology that each inaugurated is radically different. The Apple II was quintessentially generative technology. It was a platform. It invited people to tinker with it. Hobbyists wrote programs. Businesses began to plan on selling software. Jobs (and Apple) had no clue how the machine would be used. They had their hunches, but, fortunately for them, nothing constrained the PC to the hunches of the founders. Apple did not even know that VisiCalc was on the market when it noticed sales of the Apple II skyrocketing. The Apple II was designed for surprises—some very good (VisiCalc), and some not so good (the inevitable and frequent computer crashes).

The iPhone is the opposite. It is sterile. Rather than a platform that invites innovation, the iPhone comes preprogrammed. You are not allowed to add programs to the all-in-one device that Steve Jobs sells you. Its functionality is locked in, though Apple can change it through remote updates. Indeed, to those who managed to tinker with the code to enable the iPhone to support more or different applications,4 Apple threatened (and then delivered on the threat) to transform the iPhone into an iBrick.5 The machine was not to be generative beyond the innovations that Apple (and its exclusive carrier, AT&T) wanted. Whereas the world would innovate for the Apple II, only Apple would innovate for the iPhone. (A promised software development kit may allow others to program the iPhone with Apple’s permission.)

Jobs was not shy about these restrictions baked into the iPhone. As he said at its launch:

We define everything that is on the phone. . . . You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore. These are more like iPods than they are like computers.6

No doubt, for a significant number of us, Jobs was exactly right. For in the thirty years between the first flashing cursor on the Apple II and the gorgeous iconized touch menu of the iPhone, we have grown weary not with the unexpected cool stuff that the generative PC had produced, but instead with the unexpected very uncool stuff that came along with it. Viruses, spam, identity theft, crashes: all of these were the consequences of a certain freedom built into the generative PC. As these problems grow worse, for many the promise of security is enough reason to give up that freedom.

* * *

In the arc from the Apple II to the iPhone, we learn something important about where the Internet has been, and something more important about where it is going. The PC revolution was launched with PCs that invited innovation by others. So too with the Internet. Both were generative: they were designed to accept any contribution that followed a basic set of rules (either coded for a particular operating system, or respecting the protocols of the Internet). Both overwhelmed their respective proprietary, non-generative competitors, such as the makers of stand-alone word processors and proprietary online services like CompuServe and AOL. But the future unfolding right now is very different from this past. The future is not one of generative PCs attached to a generative network. It is instead one of sterile appliances tethered to a network of control. These appliances take the innovations already created by Internet users and package them neatly and compellingly, which is good—but only if the Internet and PC can remain sufficiently central in the digital ecosystem to compete with locked-down appliances and facilitate the next round of innovations. The balance between the two spheres is precarious, and it is slipping toward the safer appliance. For example, Microsoft’s Xbox 360 video game console is a powerful computer, but, unlike Microsoft’s Windows operating system for PCs, it does not allow just anyone to write software that can run on it. Bill Gates sees the Xbox as at the center of the future digital ecosystem, rather than at its periphery:

“It is a general purpose computer. . . . [W]e wouldn’t have done it if it was just a gaming device. We wouldn’t have gotten into the category at all. It was about strategically being in the living room. . . . [T]his is not some big secret. Sony says the same things.”7

It is not easy to imagine the PC going extinct, and taking with it the possibility of allowing outside code to run—code that is the original source of so much of what we find useful about the Internet. But along with the rise of information appliances that package those useful activities without readily allowing new ones, there is the increasing lockdown of the PC itself. PCs may not be competing with information appliances so much as they are becoming them. The trend is starting in schools, libraries, cyber cafés, and offices, where the users of PCs are not their owners. The owners’ interests in maintaining stable computing environments are naturally aligned with technologies that tame the wildness of the Internet and PC, at the expense of valuable activities their users might otherwise discover.

The need for stability is growing. Today’s viruses and spyware are not merely annoyances to be ignored as one might tune out loud conversations at nearby tables in a restaurant. They will not be fixed by some new round of patches to bug-filled PC operating systems, or by abandoning now-ubiquitous Windows for Mac. Rather, they pose a fundamental dilemma: as long as people control the code that runs on their machines, they can make mistakes and be tricked into running dangerous code. As more people use PCs and make them more accessible to the outside world through broadband, the value of corrupting these users’ decisions is increasing. That value is derived from stealing people’s attention, PC processing cycles, network bandwidth, or online preferences. And the fact that a Web page can be and often is rendered on the fly by drawing upon hundreds of different sources scattered across the Net—a page may pull in content from its owner, advertisements from a syndicate, and links from various other feeds—means that bad code can infect huge swaths of the Web in a heartbeat.

If security problems worsen and fear spreads, rank-and-file users will not be far behind in preferring some form of lockdown—and regulators will speed the process along. In turn, that lockdown opens the door to new forms of regulatory surveillance and control. We have some hints of what that can look like. Enterprising law enforcement officers have been able to eavesdrop on occupants of motor vehicles equipped with the latest travel assistance systems by producing secret warrants and flicking a distant switch. They can turn a standard mobile phone into a roving microphone—whether or not it is being used for a call. As these opportunities arise in places under the rule of law—where some might welcome them—they also arise within technology-embracing authoritarian states, because the technology is exported.

A lockdown on PCs and a corresponding rise of tethered appliances will eliminate what today we take for granted: a world where mainstream technology can be influenced, even revolutionized, out of left field. Stopping this future depends on some wisely developed and implemented locks, along with new technologies and a community ethos that secures the keys to those locks among groups with shared norms and a sense of public purpose, rather than in the hands of a single gatekeeping entity, whether public or private.

The iPhone is a product of both fashion and fear. It boasts an undeniably attractive aesthetic, and it bottles some of the best innovations from the PC and Internet in a stable, controlled form. The PC and Internet were the engines of those innovations, and if they can be saved, they will offer more. As time passes, the brand names on each side will change. But the core battle will remain. It will be fought through information appliances and Web 2.0 platforms like today’s Facebook apps and Google Maps mash-ups. These are not just products but also services, watched and updated according to the constant dictates of their makers and those who can pressure them.

In this book I take up the question of what is likely to come next and what we should do about it.






Sunday, May 25, 2008

The Magnifying Glass Gets an Electronic Twist

Age-related macular degeneration (AMD; 加齢黄斑変性) is a disease in which the macula of the retina degenerates with age. It can be a cause of blindness.

Novelties

The Magnifying Glass Gets an Electronic Twist

Noah Berger for The New York Times

James McCarthy, president of Freedom Vision, magnifies a label with the Quicklook Focus, a gadget intended to enlarge images for people with vision problems.


Published: May 25, 2008

PEOPLE who lose part of their sight to macular degeneration, diabetes or other diseases may now benefit from some new technology. Several portable video devices that enlarge print may help them make the most of their remaining vision.


The SenseView Duo is used to read a newspaper and map.

Swipe one of the devices over an airline ticket, or point it at a medicine bottle on a shelf, and all of the fine print is blown up and displayed in crisp letters on a screen.

Sturdy desktop video-based systems that magnify print have long been available, but lightweight, portable devices have become popular only in the past decade, as the size of consumer electronics products in general has shrunk. The new hand-held models typically weigh 9 ounces or less and can enlarge the print on nearby or more distant objects: users can pass the magnifier over a menu in a dimly lit restaurant, for example, or aim it at a grocery display on a store aisle.

The tiny, high-resolution video camera within the device captures the image, and the electronics bolster the contrast in the display, making it easier to read words on the monitor.

Dr. Bruce P. Rosenthal, chief of low-vision programs at Lighthouse International in Manhattan, which offers services for people with vision loss, said the portable magnifiers, with their built-in illumination and powerful electronics, have many advantages over traditional optical devices like magnifying glasses. “Optical devices can’t increase the contrast like these devices,” he said. “Loss in contrast causes as many problems as loss of visual acuity.”

Electronics in the new devices can make black print darker, or switch black lettering on white to white lettering on black — which some people with macular degeneration prefer.
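
Both tricks, boosting contrast and flipping black-on-white to white-on-black, are simple image operations. The snippet below approximates them with the Pillow imaging library; it is only a sketch of the idea, not the firmware the magnifier makers actually ship, and "label.jpg" is a stand-in filename.

```python
# Approximate the two processing steps described above: stretching contrast
# and inverting polarity (black-on-white to white-on-black).
# "label.jpg" is a placeholder filename, not an image from the article.
from PIL import Image, ImageOps

img = Image.open("label.jpg").convert("L")   # grayscale, like printed text

# Stretch the histogram so faint print gets darker and the paper whiter,
# clipping the dimmest and brightest 2% of pixels.
high_contrast = ImageOps.autocontrast(img, cutoff=2)

# Reverse polarity for readers who prefer white lettering on black.
inverted = ImageOps.invert(high_contrast)

high_contrast.save("label_high_contrast.png")
inverted.save("label_inverted.png")
```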

Dr. Rosenthal said the devices could help people with low vision continue with their normal rounds — for instance, shopping in the supermarket or reading a prayer book at a religious service. “One of the concerns we have in working with the visually impaired is depression,” he said. The more that people can complete everyday activities like everyone else, he added, “the more they can cope and feel that their lives are no different than others.”

The devices have a substantial drawback, however, when compared with a $40 magnifying glass: They typically cost $700 to $1,300, and Medicare and most private insurance plans usually do not pay for them, said Robert McGillivray, low-vision specialist at the Carroll Center for the Blind in Newton, Mass.

“But if the devices get you back to work, or help you with your education, or increase your pleasure in reading,” he said, “it’s well worth considering them.”

The gadgets have a bigger area of view than a traditional magnifying glass and allow for far more flexibility in viewing an image, Mr. McGillivray said. And while the cost is typically not reimbursable, “if people are looking for a job, they may be eligible for vocational rehabilitation funds,” he said.

“State agencies might provide them with this type of product if it helps them get or retain a job,” he added.

One new portable device is the Quicklook Focus ($995), which weighs 8.8 ounces. It has a camera head that sends digital video to the display, where the image is magnified, said Fergal Brennan, a design engineer at Ash Technologies outside of Dublin, the manufacturer. Users can pass the camera over a document they want to read, or hold it up at arm’s length to read the print on more distant objects.

The camera focuses electronically at the touch of a button and has a range of magnification starting at three times the original print size. The device runs off its own battery for four and a half hours at full power and for up to seven hours when the brightness is turned down, he said.

THE Quicklook Focus should be available by mid-June, said James McCarthy, president of Freedom Vision, the Mountain View, Calif.-based distributor for Ash in North America (www.freedomvision.net).

Another new device, the SenseView Duo ($1,299), available at the end of this month, has two cameras — one for close-up reading of text, the other for viewing objects eight feet or farther away, like classroom blackboards, said Doug Geoffray, co-owner of GW Micro, the Fort Wayne, Ind.-based distributor of the devices in North America (www.gwmicro.com). The product is made by the HIMS Company of South Korea.

The SenseView Duo, with a liquid crystal display 4.3 inches wide, stores up to 20 images — for instance, a screen shot of a railway timetable. Users can enlarge the image, then scan through it, moving up and down or left and right to read all the information.

Video magnification devices are valuable products in a world of often-frivolous consumer electronics, said Dr. Rosenthal of Lighthouse International.

“One of the objectives of this new technology is to improve the quality of life for people with low vision,” he said. “That’s exactly what these products are starting to do.”

E-mail: novelties@nytimes.com.

Wednesday, May 21, 2008

AVS, Audio Video Standard

Audio Video Standard, or AVS, is a compression codec for digital audio and video, and is competing with H.264/AAC to potentially replace MPEG-2. Chinese companies own 90% of AVS patents. [1] The audio and video files have an .avs extension as a container format.

Overview
Development of AVS was initiated by the government of the People's Republic of China. Commercial success of the AVS standard would not only reduce China's royalty/licensing payments to foreign companies, it would presumably earn China's electronics industry recognition among the more established industries of the developed world, where China is still seen as an outlet for mass production with limited indigenous design capability.
In January 2005, the AVS workgroup submitted their draft report to the Information Industry Department (IID). On March 30, 2005, the first trial by the IID approved the video portion of the draft standard for a period of public showing.
The dominant audio/video compression standards, developed by the MPEG and VCEG groups, enjoy widespread use in consumer digital media devices, such as DVD players. Their usage requires Chinese manufacturers to pay substantial royalty fees to the mostly-foreign companies that hold patents on technology in those standards. For example, as of 2006, licenses ranging from $2.50 to $4 already make up about ten percent of the cost for a contract-manufactured DVD player unit.[2]
According to the state-run media, a key consideration of AVS was to reduce foreign dependence on core intellectual properties used in digital media technology. Proposed as a national standard in 2004, AVS had a targeted royalty of 1 RMB (or about $0.10 USD) per player. On 30 April 2005, AVS standard video officially passed the public show and became the national standard.
AVS is currently expected to be approved for the Chinese high-definition successor to the Enhanced Versatile Disc.
Open source implementations of an AVS video decoder can be found in the OpenAVS project and within the libavcodec library. The latter is integrated in some free video players like MPlayer, VLC or xine. xAVS is also an open source AVS encoder with a working decoder.
In September 2007 the DVD Forum announced a new standard for high-definition media expressly meant for the Chinese market, the so-called CH-DVD. As the name suggests, this standard follows in the steps of HD DVD, being based on much of the same technology, but using a different form of data modulation and encryption for writing the data onto the disc. It also adds official support for AVS in addition to the audio and video codecs already supported by HD DVD. It is unknown at this point whether CH-DVD players will include HD DVD support, but current HD DVD players won't be able to play CH-DVDs as they don't support these differences.

References
^ AVS official web site
^ Taiwan joins Chinese effort on proprietary DVD format

External links
AVS homepage
AVS sample clips

[Economic Daily News (經濟日報), reporter 陳信榮]
2008.05.22 02:48 am

AVS (Audio Video coding Standard) is the newest digital audio/video coding standard, developed independently in mainland China and covered by Chinese-held patents.

Unlike H.264, MPEG-2 and MPEG-4, the video coding standards adopted in Europe and the United States, AVS is a specification developed within mainland China and has already been adopted as a national standard. Not only is it cheaper to license, its video encoding and decoding efficiency is also held to compare favorably with the Western standards, and with national policy behind it, mainland China's digital transition is expected to proceed on a broad front.

MPEG-2, which appeared in 1994, was the first-generation video coding standard; MPEG-4, H.264 and AVS are second-generation standards. In coding efficiency, MPEG-4 is about 1.4 times as efficient as MPEG-2, while AVS is comparable to H.264, both more than twice as efficient as MPEG-2 (a rough bitrate illustration follows below).

According to Wang Guozhong (王國中), chairman of the China AVS Industry Alliance, AVS is a core technology for mainland industries such as digital television, high-density optical discs, video telephony, videoconferencing, digital surveillance and mobile video communications.
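
As a rough check on those efficiency ratios: if one assumes, purely for illustration, that an HD broadcast needs about 15 Mbit/s under MPEG-2, the figures quoted above imply roughly the following bitrates for comparable picture quality.

```python
# Back-of-the-envelope bitrates implied by the efficiency ratios quoted above.
# The 15 Mbit/s MPEG-2 baseline is an assumed figure, used only to illustrate.
mpeg2_mbps = 15.0
efficiency_vs_mpeg2 = {
    "MPEG-2": 1.0,
    "MPEG-4": 1.4,   # "about 1.4 times MPEG-2"
    "H.264": 2.0,    # "more than twice MPEG-2"
    "AVS": 2.0,      # "comparable to H.264"
}
for codec, ratio in efficiency_vs_mpeg2.items():
    print(f"{codec:6s} ~ {mpeg2_mbps / ratio:4.1f} Mbit/s for comparable quality")
```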

Diesel Automobiles Clean Up for an Encore

Volkswagen says it will be the first to market with diesels clean enough to pass muster in every state. Jetta TDI sedans and wagons are due to arrive in August.

By LAWRENCE ULRICH

Published: May 18, 2008
AFTER years in the automotive wilderness, largely exiled to the smoky borders of truck stops, diesel is coming home. Americans may not recognize its freshly scrubbed face.

Pros

– Mileage is 25 percent to 40 percent higher than with gasoline.
– Carbon dioxide emissions are lower.
– Highway mileage and performance are better than hybrids’.
– High torque is well suited to large pickups and S.U.V.’s.
– Extended driving range means less frequent fill-ups.
– Engines are robust, often lasting 300,000 miles or more.

Cons

– Engines and emissions systems can be costly.
– Diesel fuel currently costs far more than gasoline.
– Like gasoline, diesel is a petroleum product from foreign suppliers.
– Though outdated, its image as a dirty technology lingers.
– Only 42 percent of American filling stations have diesel pumps.
– Some companies’ latest emissions controls require refills of urea.
Paul Sancya/Associated Press
The Audi R8 V-12 TDI concept car demonstrates the high-performance potential of diesel engines.



A 19th-century invention by Rudolf Diesel, the diesel engine has always been known for outstanding fuel efficiency, with better mileage (by 25 percent to 40 percent) than gasoline. But the kerosenelike fuel and the engines that burn it were dirty, noisy, dawdling and even deadly, linked to increased risk of cancer and respiratory disease.

That has all changed, in part because of cleaner-burning fuel — its 2006 rollout had been mandated in 2000 by the Clinton administration — that has 97 percent less of the sulfur responsible for diesel engines’ sooty particulates.

The low-sulfur fuel, hailed by the Environmental Protection Agency as a historic advance, has opened the door to sophisticated emissions controls that let diesel engines meet the strict pollution standards of California. Those rules, the world’s most stringent by far, require 2009-model diesels to be as green as gasoline or even hybrid models.

In the meantime, advances like turbocharging and high-pressure fuel injection have transformed diesel cars from soot-belching slowpokes with a telltale clickety-clack sound to smooth, tidy and powerful machines that many Americans would have a hard time distinguishing from gasoline models.

With technical and environmental hurdles overcome — and facing tougher mileage standards that call for a 35 m.p.g. average by 2020 — automakers are rushing in with clean-diesel cars.

Two sets of emissions rules — a very strict set for California and four other states, another for the remaining 45 states — had kept most diesel cars out of the United States until now. In contrast, fuel-sipping diesels were embraced in Europe, where they account for half of passenger car sales.

But starting with the 2009 model year, several automakers have developed diesels clean enough to pass muster in all states, including — at last — the big California and New York markets.

Volkswagen says it will be the first to market, with Jetta sedans and wagons arriving in August. Mercedes will follow in October with diesel versions of its GL-, ML- and R-Class sport crossover utilities. BMW is preparing a mighty twin-turbo 6-cylinder diesel for sale this fall in the 335d sedan and X5 35d sport wagon.

Audi’s Q7 3.0 TDI utility wagon goes on sale early next year. That automaker has been vividly demonstrating modern diesel’s one-two punch by dominating recent runnings of the 24 Hours of Le Mans with its R10 racers, which are not only fast, but are the quietest, cleanest and most fuel-efficient cars in the field.

The new diesel disciples are not just the usual German suspects. Three Japanese companies — Honda, Nissan and Subaru — are ramping up the technology. Long known for efficient gasoline engines, Honda will offer its first American diesel next year, as an option on the Acura TSX sedan. A similar diesel Honda from Europe that I recently tested achieved a wallet-friendly 53 m.p.g. on the highway.

Honda also plans to offer a diesel V-6 around 2010 that may find its way into the Acura TL sedan, the Acura MDX utility or the Honda Odyssey minivan.

Nissan will install a Renault-designed diesel in its Maxima sedan for 2010; Subaru will counter with a diesel the same year, probably in a Legacy sedan or Outback wagon. A Jeep Grand Cherokee diesel arrives in 2009, and General Motors, Ford and Dodge all plan 50-state diesel versions of their light-duty pickup trucks in 2009 or 2010.

The situation seems to defy the conventional wisdom that saw diesel cars heading to history’s scrapyard. As late as 1982, Mercedes relied on diesels for 80 percent of its American sales. But aside from their strong presence in heavy-duty trucks, diesels have been relegated to a small but loyal fringe.

The diesel revival takes its cues from Europe, where the engines power everything from tiny microcars to luxurious autobahn cruisers. Strikingly, hybrids have grabbed less than 1 percent of the European market. Yet automakers acknowledge that mending diesel’s foul reputation in the United States remains an enormous challenge.

Johan de Nysschen, executive vice president at Audi of America, estimates that diesels might eventually account for 15 percent of Audis sold here. But first, he said, Americans must learn that modern diesels are not only clean and fun to drive, but more efficient than hybrids for many consumers.

“In stop-and-go city driving like Manhattan, the hybrid is a good solution,” Mr. de Nysschen said at the New York auto show this spring. “But we need to convey the message that hybrids are not the definitive solution.”

Under the hood, there is little to distinguish diesel engines from those that burn gasoline. Both use pistons, valves and electronic fuel injection, but the differences go beyond the form of petroleum that goes in the tank. Today’s gasoline engines ignite their fuel with a high-voltage spark; diesels, also known as compression-ignition engines, light the fire with the heat generated by squeezing the air in the cylinders to a far greater degree. This is one of their main advantages: a compression ratio of nearly 20:1, compared with a maximum of about 12:1 for gasoline. This means that diesel engines extract more power from their fuel.
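
A rough way to see why the higher compression ratio matters is the air-standard ideal-cycle efficiency, eta = 1 - r^(1 - gamma). Real engines, and diesel combustion in particular, depart from this simple Otto-cycle formula, but it captures the trend; the numbers below are an idealized illustration, not measured engine efficiencies.

```python
# Idealized (air-standard Otto) efficiency versus compression ratio:
# eta = 1 - r**(1 - gamma). A simplification (real diesel combustion follows
# the Diesel cycle and loses energy to friction, heat and pumping), but it
# shows why squeezing the charge harder extracts more work from the fuel.
GAMMA = 1.4   # ratio of specific heats for air

def ideal_efficiency(compression_ratio, gamma=GAMMA):
    return 1.0 - compression_ratio ** (1.0 - gamma)

for label, r in [("gasoline-like, r = 12", 12.0), ("diesel-like, r = 20", 20.0)]:
    print(f"{label}: ideal efficiency ~ {ideal_efficiency(r):.0%}")
```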

The compression of a gasoline engine can’t simply be cranked up higher — the gasoline would burn erratically. Diesel fuel, a petroleum distillate, will tolerate those high cylinder pressures.

Another reason diesels get better mileage: the fuel contains 12 percent more energy a gallon.

Largely because they burn less fuel, the engines produce up to a third less carbon dioxide than gasoline models — compelling some environmentalists to reverse their longstanding opposition. Diesel’s drawback had been high levels of smog-forming nitrogen oxides and carcinogenic soot.

The greening of diesel involves the new ultra-low-sulfur fuel, cleaner-burning engines and a suite of emissions equipment.

Filters trap sooty particulates while catalysts use ammonia to convert nitrogen oxides into harmless nitrogen and water in the exhaust.

“There’s a little chemical processing plant in there, and some pretty amazing chemistry,” said Thomas Hinman, vice president for diesel technologies at Corning, a leading supplier of cellular ceramic filters for diesel engines.

For many models, including those from BMW, Mercedes and Audi, there is a catch: their S.U.V.’s will carry six- to eight-gallon tanks of urea, an ammonia-rich solution injected into the exhaust to neutralize smog-forming pollution.


And to ensure that consumers don’t let the urea run dry, Mercedes is installing a dashboard alert that warns consumers when the urea level drops below one gallon. From there, owners will be on a countdown until the tank is topped off: the cars will start just 20 more times before they cannot be operated. That countdown is a concession to federal regulators, who demanded technical assurances that these groundbreaking systems would work continuously to keep emissions below legal levels.

The smaller 4-cylinder VW and Honda diesels, in contrast, meet 50-state standards without requiring urea tanks that would have to be replenished every 12,000 miles or so.

Yet as automakers dress up diesel for its coming-out party, one unexpected development is threatening to spoil it. For decades, diesel fuel cost less than gasoline, amplifying the advantage of its higher mileage. But over the last year, diesel has soared to a record average of $4.33 a gallon nationwide, compared with $3.72 for regular gasoline.

George Peterson, vice president of the AutoPacific consulting firm, said that diesel cars traditionally offset their higher prices through both fuel savings and higher resale value. But higher-price diesel fuel puts both those financial incentives at risk.

“Given the price of diesel, you can’t get the cars to pay you back, so it doesn’t make as much sense,” he said.

While diesel currently costs 16 percent more than gasoline, that premium is more than offset by mileage gains of 25 to 40 percent. Consumers would still save money with a diesel car, and they would fill it less frequently.
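
A quick cost-per-mile calculation with the figures quoted in this article ($4.33 a gallon for diesel against $3.72 for regular, and a 25 to 40 percent mileage advantage) bears that out. The 30 m.p.g. gasoline baseline below is an assumed number chosen only to make the arithmetic concrete.

```python
# Cost per mile using this article's fuel prices and mileage advantage.
# The 30 m.p.g. gasoline baseline is an assumption made for illustration.
gas_price, diesel_price = 3.72, 4.33     # dollars per gallon, from the article
gas_mpg = 30.0                           # assumed gasoline baseline

print(f"gasoline: {100 * gas_price / gas_mpg:.1f} cents per mile")
for advantage in (0.25, 0.40):           # 25 and 40 percent better mileage
    diesel_mpg = gas_mpg * (1 + advantage)
    print(f"diesel (+{advantage:.0%} m.p.g.): "
          f"{100 * diesel_price / diesel_mpg:.1f} cents per mile")
```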

The Mercedes E320 diesel sedan, for example, can cover roughly 700 highway miles on a tank. Clean-diesel models may also become eligible for federal tax credits of up to $3,400.

Consumers will also pay more for diesel technology, with manufacturers estimating that diesel engines and emissions gear add from $1,500 to $3,500 to their costs for each car. Mercedes is charging only $1,000 extra for its diesel models, compared with the equivalent gasoline versions, though some analysts suggest that Mercedes is partly subsidizing diesels to win converts. Steve Keyes, a spokesman for Volkswagen, said the Jetta diesel sedan and sport wagon would cost less than $2,000 over the gas versions, a price that he said would cover the additional costs.

Automakers also note major differences between European and American markets. European nations have long subsidized diesel by taxing gasoline at higher rates. Additional taxes on large engines also drove consumers into small but relatively powerful diesels.

Finally, diesel isn’t as widely available as gasoline, though 42 percent of service stations nationwide offer the fuel, according to the Diesel Technology Forum, a trade group.

Many analysts expect diesels to blow past hybrids in popularity. J. D. Power & Associates estimates that diesel will explode from its 3 percent market share to 11.5 percent by 2015, exceeding hybrids at 7 percent. Continued high diesel prices could force an adjustment in that projection.

“People will definitely get sticker shock at over $4 a gallon,” said Mike Omotoso, the powertrain analyst at J. D. Power. “But we see the huge price gap between gasoline and diesel as a relatively short-term spike.”

And as the industry hedges its bets on which fuels and technologies — including gas-electric hybrids, diesels, plug-in hybrids and ethanol — will catch on, some automakers are publicly at odds over diesel’s chances.

General Motors’ advanced propulsion strategy is to develop a full range of alternatives to gasoline, including hybrids, ethanol, hydrogen and its Chevrolet Volt plug-in electric car. As part of that strategy, G.M. is developing new diesels, including a 4.5-liter V-8 that it will offer on 2010 models of the Chevy Silverado and GMC Sierra pickups.

Yet while G.M. is already selling 1.3 million diesel models a year worldwide — and is readying a diesel-powered Cadillac CTS for Europe — it sees diesel’s American future in pickups and S.U.V.’s, not in affordable cars.

G.M. engineers say diesel can raise the mileage of a trailer-towing truck by 70 percent, making it a smart buy. But, they say, for a gasoline car that already gets 35 m.p.g., diesel’s gains don’t justify the added costs.

Some automakers prefer to squeeze higher mileage from gasoline-burning engines without the expense of diesel engines and emissions gear. “There’s no question that we can get to the 35 m.p.g. standard with gasoline,” said John Krafcik, Hyundai’s vice president for product development.

As technologies vie for supremacy, the diesel-versus-hybrid debate has been especially fierce. But diesel devotees don’t have to be hybrid haters, or vice versa. With petroleum expected to dominate the automotive landscape for several more decades, the hybrids and diesels that burn it are central technologies in the transition to alternative fuels and the drive against global warming.

As if to prove the point, some automakers are marrying diesel and hybrid for the best of both worlds. Mercedes has shown a diesel-hybrid prototype of its big S-Class sedan that the company estimates would achieve 44 m.p.g. VW has shown a 69 m.p.g. diesel-hybrid Golf, though Mr. Keyes said the technology was years away from production.

Johannes-Joerg Rueger, vice president for diesel engineering at Robert Bosch, a major manufacturer of diesel systems, said: “If you’re looking at the carbon dioxide and mileage goals that have to be met, it doesn’t really matter whether it’s diesel or hybrid. Let the consumer choose.”

Tuesday, May 20, 2008

HP Aims to Follow in IBM's Footsteps

WSJ
May 14, 2008, 15:04

Few things a company does get more attention than a big acquisition. Because it usually means betting the house, enormous amounts of time and money go into thinking it through. Nothing, therefore, is worse than a takeover proposal that fails to come off. Still, some of these deals do get done in the end.

HP hopes that, after it acquires EDS, services will account for a bigger share of its portfolio.

Hewlett-Packard Co. is preparing to try once again to make itself look less like Dell, the hardware company, and more like International Business Machines Corp. (IBM), which earns a large share of its profit from the advice it provides. HP said Tuesday that it would spend about $13.25 billion to buy Electronic Data Systems Corp. (EDS), the technology-services company that Ross Perot founded in the early 1960s. HP has long envied the way IBM remade itself into a services and consulting firm and thereby sidestepped the ups and downs of the computer-hardware business.

EDS was one of the earliest computer outsourcers; Perot, who had worked in sales at IBM, knew firsthand the trouble government agencies and private companies had running unwieldy mainframes, and he founded EDS to solve those problems for them. Like many companies, it has since reinvented itself: EDS is now in the usually lucrative business of helping companies around the world devise and carry out technology strategies. If you are HP, and most of your profit comes from printer ink cartridges or personal computers, that business looks especially attractive.

The EDS deal has been a long time coming for HP. At the height of the Internet bubble, HP proposed buying PricewaterhouseCoopers for considerably more than it is now paying for EDS. That deal fell through, which set the stage for the purchase of Compaq Computer Corp., the deal that defined HP's Carly Fiorina era. The Compaq acquisition is usually regarded as a great success; or, at the very least, HP gets high marks for absorbing Compaq smoothly and becoming the world's leading personal-computer maker over the past few years. Dell helped, of course, by stumbling at around the same time. HP's deft handling of the Compaq deal proved its many critics badly wrong, and it no doubt also helps quiet any opposition to the EDS purchase that might surface on HP's board.


HP's chief executive, Mark Hurd, is regarded as a steady operator with little appetite for bold, dramatic moves. Even so, a deal as large as the EDS acquisition carries considerable risk. For one thing, the two companies have very different characters: one is a buttoned-down Texas firm, the other a more freewheeling Silicon Valley outfit, and blending them will take time. As in any marriage, a partner seen up close can reveal flaws and blemishes that were hidden, or simply unnoticed, during the courtship. The more serious risk is that the strategic rationale may no longer hold once the deal is done. Running other people's computers, for instance, may prove less appealing than first helping them decide which computers to buy. Microsoft Corp., which withdrew its bid for Yahoo Inc. at the last minute this month, may have been thinking along those lines. Given that the two companies parted ways over a difference of only a few dollars a share, Microsoft's decision to walk away was less about stinginess than a belated realization: if it could not beat Google Inc. on its own, adding Yahoo was unlikely to improve its luck. Most outsiders, and even many Microsoft managers, had been saying as much.

Yahoo's investors seem to see things differently. Yahoo's shares now trade between $24 and $27, far above the $19 they fetched when Microsoft first signaled its interest. Evidently Yahoo shareholders believe Microsoft will bid again, and they do not want to miss out. That creates a bind with no easy exit, because Microsoft is unlikely to try again, and if it ever does, it will wait until Yahoo's share price comes down first.

Mr. Hurd was surely watching the Microsoft-Yahoo drama as he weighed the EDS deal. If he wanted to frighten himself with a tech-acquisition nightmare, he might have recalled eBay Inc.'s $2.6 billion purchase of Skype in 2005, the free Internet-calling software with millions of users and almost no revenue. EBay bought Skype because investors are often like children at bedtime: a story makes them feel better. EBay, under fire at the time for lackluster sales, needed to show Wall Street that it still had the technical chops to keep up with a fast-changing Internet industry. The story eBay told analysts was that Skype would let eBay buyers talk to sellers from their computers. Like many stories, it was a fairy tale; a buyer with a question usually just sends an e-mail, and if eBay had wanted buyers and sellers to talk, it could have subsidized their phone calls for far less than the cost of buying Skype. EBay later wrote off much of the Skype purchase price, and there have been rumors that it is shopping the business around.

Stories like these are what make tech-industry marriages as hard to read as the breakups and reconciliations of Hollywood stars. Next to cold telephones or hot computers, the professional information-technology services EDS provides are about as exciting as, well, professional IT services. But HP's overture has made EDS interesting. Lee Gomes (Editor's note: Lee Gomes writes The Wall Street Journal's "Portals" column, which covers technology, business and related topics.)

MEMS

(engineering) A system in which micromechanisms are coupled with microelectronics, most commonly fabricated as microsensors or microactuators. Abbreviated MEMS. Also known as microsystem.


Taiwan Semiconductor Manufacturing Co. (TSMC), the world's largest silicon foundry, has disclosed details of the new MEMS foundry service it is about to launch. Anticipating the gradual convergence of semiconductor devices and MEMS, the company has prepared in advance to offer foundry services that combine MEMS with CMOS. The plan's defining feature is turning MEMS manufacturing processes into standard platforms. Robert Tsai, TSMC's MEMS program director, disclosed the details on May 16 at "MEMS International 2008," a seminar hosted by Nikkei Microdevices.

To make full use of the experience it has built up in silicon foundry services, the key is to turn the design and manufacturing-process offerings into common platforms wherever possible. TSMC's approach lets MEMS manufacturing processes be standardized within a certain range. The MEMS field currently has no design or process standards, so design rules, manufacturing processes, packaging and test methods have had to be developed separately for each product.

On the design side, for example, TSMC hopes to work with multiple partner companies over the long term to build platform IP libraries. It plans to complete IP libraries for sensors and inkjet printheads in 2008, and for DMD (digital micromirror device) and RF (radio-frequency) components in 2009.

Manufacturing will likewise be modularized by process step, with modules for glass-substrate processes, sacrificial-layer processes, mirror forming and hinge-structure forming. Steps such as backside exposure, deep silicon etching, CVD/PVD, backside grinding and CMP will also be brought into the modularization effort.

The MEMS foundry service will run at "Fab 2" and "Fab 3" in the Hsinchu Science Park; Fab 2 is a 150mm-wafer line and Fab 3 a 200mm-wafer line.

■ Japanese original: "TSMCがMEMSファウンドリの詳細を明らかに" (TSMC discloses details of its MEMS foundry)

Monday, May 19, 2008

cloud computing (ibm)

Special Features:
Catching Up on Cloud: An Interview with IBM

If there's one company trying to scatter cloud computing across the planet -- like some kind of big, blue cloud, you might say -- it's IBM. And one of the top guys behind IBM's cloud initiatives is Dennis Quan, chief technology officer for High-Performance On-Demand Solutions. The HiPODS team is charged with helping customers smartly grow and manage their datacenters, accelerate time to market and reduce IT complexity, among other things. In this Q&A with GRIDtoday, Quan gave us 29 minutes to recap some of the high points of IBM's busy past six months or so in cloud computing. Then he had to catch a plane.

GRIDtoday: When customers ask you to explain cloud computing, what do you tell them?
DENNIS QUAN: Cloud computing is about providing applications to large numbers of users via the network and their connected mobile devices, their laptop computers or whatever, and having an IT infrastructure, a cloud computing center, that's capable of supporting large numbers of applications, and being able to massively scale to meet growing user demands. We've actually been able to prove out these concepts in some projects in use within IBM. And we've found that one of the key characteristics is not only scaling to demands, but being able to get applications on board and running as quickly as possible. This is critical for this generation of applications because of the innovation cycle being so fast today. You really need to get innovators the compute resources they need, and that's one of our goals with our on-demand solutions.
One message that resonates well with customers is being able to increase the speed at which they can prototype their apps and get feedback. Because we're using virtualization and provisioning automation, we're able to allow people to go to a self-service portal and say "I need these Linux boxes with an app server," and so on, and be able to get that done within minutes. You know, as opposed to taking potentially weeks to acquire the machines and set them up and rack them and install the software themselves. That is extremely appealing to our customers. They also like having the freedom to run any kind of workload they like and put any kind of software on the cloud that they like. It's not limited to back-end tasks, batch-processing tasks; it could be user-facing applications, it could be Web servers, it could be database servers.
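To make the self-service idea concrete, here is a minimal, hypothetical sketch of the kind of request Quan describes. It is not IBM's portal or Tivoli API, only an illustration of declaring the resources you need and letting automated provisioning do the rest.

from dataclasses import dataclass, field

@dataclass
class ProvisionRequest:
    project: str
    machines: int
    os_image: str
    middleware: list = field(default_factory=list)
    lease_days: int = 30

def provision(request):
    """Pretend to provision virtual machines and return their names."""
    vms = ["%s-vm%02d" % (request.project, i) for i in range(request.machines)]
    for vm in vms:
        # A real portal would clone an image, install the requested
        # middleware and register the machine; here we only print.
        print("provisioned %s: %s + %s" % (vm, request.os_image, request.middleware))
    return vms

# "I need these Linux boxes with an app server" becomes a declarative request:
provision(ProvisionRequest(project="idea42", machines=3,
                           os_image="linux", middleware=["app-server"]))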


Gt: What's one of your most compelling examples of cloud computing?

QUAN: About two and a half years ago we launched a cloud within IBM, the Innovation Portal. When individual users at IBM had a new idea -- instead of having to hunt down permission to buy a new machine and having to find a place to host it and install it, etc., which is not only time-consuming but also resource- consuming because they have to handle all the system admin stuff -- what they can do instead is go to this self-service portal and request resources. It can be 20 virtual machines running Linux and WebSphere and DB2, for example, or any number of combinations. It's a lot like booking a hotel room on a Web site. You're able to get access to a certain number of resources for a period of time, and the system goes off and provisions that for you, and you're given access to those machines, with all the software and middleware, etc., set up for you.

Since launching this cloud within IBM, we've had over 100 projects run on it, and about 20 percent have contributed to technology used in shipping products.
The applications and projects we've run in the cloud have ranged from collaboration tools to social networking tools to development tools -- and even a game. What we've found is that people are able to access compute resources very quickly, which benefits not just individual innovators with a brand new idea but also design teams who want to test their new product on the greater IBM population. A software development company could use this kind of cloud to do in-house testing and quality control.

Gt: What software is at the heart of this cloud?

QUAN: We put together this cloud solution based on our Tivoli products -- Tivoli Provisioning Manager, Tivoli Monitoring -- and that's really been the foundation architecture that we've been using in all our explorations of cloud computing.

Gt: What is the goal of the joint venture with Google announced last fall?

QUAN: It's a partnership to promote research into cloud computing, especially to promote the parallel programming models we think are going to be important for future applications to take advantage of these large cloud centers. We've built out three clouds for this project. One at the Almaden Research Center in San Jose (California), one at the University of Washington, and one at a Google datacenter. We've been able to get six universities involved with this project (MIT, University of Maryland, Carnegie Mellon University, Stanford University, University of Washington, University of California, Berkeley). Overall it's about a thousand machines across the three sites, using the Tivoli architecture mentioned earlier.

Gt: November's Blue Cloud announcement of "ready-to-use cloud computing": What does it mean for businesses, and what's happened with the initiative since then?

QUAN: Blue Cloud is really a statement about everything we've learned about running clouds for innovation enablement inside IBM, or for supporting these new parallel programming models. It's about how we're going to apply these new technologies to solve some of the nagging pain points of many of our customers. They're facing trouble trying to grow their datacenters in the face of rising energy costs and running out of space, etc.
Blue Cloud is about having a broad spectrum of products across our systems and software technologies, our services, to support a cloud computing style of datacenter management. It's really all about having the massively scalable datacenter model that will allow you to support large numbers of applications and users, and a very diverse range of applications and workloads.
What we've done is put together an offering -- which we project will come out this spring -- that will allow our customers to be able to start up a cloud center of their own within their datacenters, within their own four walls. We've found that a number of our customers are very interested in this kind of highly scalable, manageable form of large-scale computing, but want to be able to maintain control of their own datacenters. So we are commercializing what we've learned in the clouds that we've built so far.
We want to be the arms dealers, as it were, of these cloud computing components that will enable our customers to build up datacenters that have these capabilities.

Gt: In February, IBM announced it will build the first cloud computing center in China, at the new Wuxi Tai Hu New Town Science and Education Industrial Park. Who will be using the Wuxi cloud, what for, and why?

QUAN: We've engaged with the municipal government of Wuxi, a city north of Shanghai, to build them a cloud for software development. There are cities all over China that want to create software parks, entrepreneurs who want to do enterprise software development for multinationals, or do a variety of things like animation -- things that involve a large amount of compute resources. They can really benefit from access to scalable resources on demand. So, we're building them a cloud center that includes a wide range of our Rational tools for developing enterprise applications.

An entrepreneur making use of this cloud can go to the self-service portal and say, "I am going to be doing this project for 10 months and I'm going to need resources for my 20 or 40 developers to be able to do source control and project management," and they'll have the appropriate products provisioned for them. The government will be able to bill them on a monthly basis, or whatever the schedule happens to be. The big benefit to these small entrepreneurs is that upfront costs of buying hardware and software licenses, as well as the ongoing maintenance, have all been centralized and borne by the government, and so the software company is able to pay as they go to make use of this.

We've seen that as a very popular model for governments not just in China but around the world who are interested in promoting economic development, entrepreneurialism, and innovation.

Gt: Then a month later, IBM opens a new cloud center in Dublin.
QUAN: We partnered with the local government industrial development authority. This particular cloud is run out of an IBM center. We're able to use this center to demonstrate to clients the benefits of cloud computing, especially for enterprises.
One of the highlights of the Dublin cloud is a solution we call the Idea Factory for Cloud Computing. It's a Web 2.0 app that allows you to exchange ideas using different collaboration tools like blogging and wikis and such. One customer's consultants had an idea exchange session a couple weeks ago with thousands of participants. They've been happy with what they can do using an application being delivered from a cloud computing center. We've had similar experiences with a wide range of institutions, including governments and financial services.

Gt: What does cloud technology mean for tomorrow's datacenter?

QUAN: We see cloud computing as a broadly applicable technology platform for enabling the next generation of datacenter, what we call the new enterprise datacenter. [This type of datacenter will be] able to combine the things that you see in the Web-centered cloud platforms out there -- the MySpaces, the Flickrs, the YouTubes -- where they're able to do large scalable application delivery and support large numbers of users with the characteristics of traditional enterprise datacenters, where large companies are able to depend on these datacenters for mission-critical applications, being able to do secure transaction processing, being able to maintain security and isolation of data. The new enterprise datacenter model is inspired by this Web-centered cloud concept and also inherits the characteristics of enterprise datacenters that our clients find absolutely critical.

Gt: What kind of response do you get from management and IT people when talking about bringing a cloud into their business?
QUAN: I think the way we've talked about cloud computing tends to resonate very well. They might have concerns based on things they read in the press and hear from other folks in industry, but at the end of the day, they care most about solving the problems that hamper them. How are they going to get higher utilization out of their datacenters that are running out of space? How are they going to lower the labor costs or the maintenance costs of running large-scale systems? And that's really where we've targeted our solutions. It's by design trying to help along these axes. So using technologies like virtualization, provisioning, automation, it's really about taking what they've seen in terms of the benefits from Web-centric clouds and then applying it directly to the pain points that they have.

Gt: What are some of those pain points?
QUAN: One of the biggest is machine utilization. We've probably all seen the statistics about x86 datacenters having about 5 to 10 percent utilization across their systems. And then you talk about needing to grow those datacenters and you're running out of space, one of the things you want to do is make higher use of the machines you have. By using things like virtualization, we're able to improve utilization significantly. Virtualization has been an IBM specialty for ages.
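The utilization figures Quan cites translate into a simple consolidation estimate. In the sketch below, the 5-to-10-percent utilization comes from his remark; the fleet size and the target utilization are illustrative assumptions.

import math

def hosts_needed(physical_servers, avg_utilization, target_utilization):
    """How many equally sized virtualized hosts could absorb the same load."""
    total_load = physical_servers * avg_utilization   # in "whole server" units
    return math.ceil(total_load / target_utilization)

# 1,000 x86 boxes averaging 8 percent utilization, consolidated onto hosts
# we are willing to run at 60 percent utilization:
print(hosts_needed(1000, 0.08, 0.60))   # -> 134 hosts carry the same workload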
Gt: Just a couple weeks ago, IBM introduced a new line of servers. Tell us about the iDataPlex systems and how they fit into the cloud environment.

QUAN: They're a great example of a hardware platform to support cloud systems. The iDataPlex systems allow for an extremely dense configuration of compute power in the rack. We think these Linux servers can be used to support an extremely large cloud datacenter. They double the number of systems that can fit in a rack, but use about 40 percent less power. We actually have some of these systems running within the cloud that we have in our laboratories, and they'll be used in our other cloud installations. They're a key part of the portfolio we're offering to companies to build out their clouds. The systems support all the characteristics needed for cloud computing: extremely dense, vast pools of compute power, virtualization capabilities, and so on.

Gt: We will refrain from asking you if the future of cloud computing is sunny ... so, how is it?

QUAN: The future is pretty bright because what we're seeing right now is such growth in mobile technologies and need for data access anywhere. More and more users are going to be signing on from more and more locations. In developing economies they'll be signing on mostly from mobile devices because of lack of traditional infrastructure. That's going to put extreme demands on datacenters to be able to scale and to process the large amounts of video, audio, and text that these users are contributing and sending to each other. You're going to need a cloud computing-style datacenter model to support these kinds of applications, and these phenomena are not restricted to consumer applications. You see these things happening within enterprises in terms of the types of collaboration or interactions people have within a business, for anything from supporting basic e-mail to sales processes, CRM, and line-of-business applications. All these things are going to undergo a transformation toward being mobile, supporting richer forms of Web 2.0 interactions, and being able to sustain lots of concurrent users.

These are all driving the need for scalable datacenter models, such as the ones we've been building with our cloud computing initiatives. And finally, clouds will respond to green initiatives to get the most out of the compute cycles that you have in these datacenters because of the rising cost of energy. We made an announcement about a month ago in collaboration with Ohio State and Georgia Tech about research we're doing with them on autonomic computing as applied to clouds. There are several areas we're looking at, like automating workload scheduling, more intelligent balancing of resources across a virtualized datacenter, and being able to do workload movement. These features would let a company shut down a portion of the datacenter if it's not in use in order to save electricity.
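The workload-movement idea can be sketched as a toy packing policy: move load onto as few hosts as possible and power down the rest. This greedy illustration is not the autonomic-computing research described above, just a way to see the mechanism.

def consolidate(loads, capacity=1.0):
    """Greedily repack per-host loads; return (kept hosts, hosts to power down)."""
    kept = {}
    for host, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        # Try to move this host's load onto a host we are already keeping.
        target = next((h for h, used in kept.items() if used + load <= capacity), None)
        if target is not None:
            kept[target] += load
        else:
            kept[host] = load
    powered_down = [h for h in loads if h not in kept]
    return kept, powered_down

loads = {"hostA": 0.15, "hostB": 0.10, "hostC": 0.55, "hostD": 0.05}
kept, off = consolidate(loads)
print(kept)   # {'hostC': 0.85} -- the whole load fits on one host
print(off)    # ['hostA', 'hostB', 'hostD'] could be shut down to save power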

We've been showing these technologies to customers for a couple years now and they're now able to understand how those things apply to them. It's basically been taking off.

Thursday, May 15, 2008

The latest processing chips require “parallel programming”

Software

Cores of the problem

May 14th 2008
From Economist.com

The latest processing chips require a new approach to writing software



COMPUTER makers talk a lot about a coming wave of software that will change the way people behave towards their machines. Rich three-dimensional virtual worlds and multimedia applications that mimic the experience of a live concert in a living room will, they say, become commonplace. But there is a problem. Although hardware makers are producing PCs, laptops and portable devices with ever increasing processing power, the software industry is falling behind in its capacity to write programs that can make use of all this power.

Everyone is familiar with how Intel, AMD and other chipmakers churn out faster and faster processors. But in the past few years the design of these chips has changed. Instead of making chips faster by making their components smaller and running them at higher speeds, makers have started building multiple processing engines, or “cores”, onto each chip. Each core can run at a lower speed, which requires less energy and produces less heat, and the overall number-crunching power of the chip continues to increase.

But this change requires programmers to write code that can split the processing tasks efficiently between the cores. Such “parallel programming” is a classic problem in computer science, but not enough programmers have mastered the necessary techniques. Even so, the chipmakers have no intention of slowing down. Dual-core and four-core chips are already available, and Intel plans to launch six-core chips later this year. Chips with even more cores will follow in 2009.
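To see what splitting work between cores looks like in practice, here is a minimal parallel-programming sketch in Python. It stands in for the general technique the article describes, not for any framework produced by the research programs mentioned below.

from multiprocessing import Pool
import os

def count_primes(bounds):
    """Count primes in [lo, hi) -- a deliberately CPU-heavy chunk of work."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        d = 2
        prime = True
        while d * d <= n:
            if n % d == 0:
                prime = False
                break
            d += 1
        if prime:
            count += 1
    return count

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    limit = 200_000
    step = limit // cores
    chunks = [(i * step, (i + 1) * step) for i in range(cores)]
    chunks[-1] = (chunks[-1][0], limit)      # make sure the final chunk reaches the limit
    with Pool(processes=cores) as pool:      # one worker process per core
        total = sum(pool.map(count_primes, chunks))
    print("%d primes below %d, counted on %d cores" % (total, limit, cores))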

To help their colleagues in the software industry catch up, companies are dipping into their own funds. In March Microsoft and Intel teamed up to give $10m each to the University of California at Berkeley and the University of Illinois to finance work on parallel programming. At Berkeley, researchers will develop new types of software for computers and mobile devices. This could include a browser for mobile phones that can handle demanding video applications. Almost 50 researchers at Illinois will tackle similar projects.

Intel, Sun Microsystems, NVIDIA, AMD, HP and IBM are paying for a similar effort at Stanford University. A centre called the Pervasive Parallelism Lab has been created at Stanford to bring together software developers working on parallel programming. One of its first tasks is to create a programming framework that can be applied to virtual worlds, robotics and the analysis of vast amounts of scientific and financial data.

The virtual-world research will, in theory, produce online destinations with graphics and interactive capabilities as good as those from today's video-game consoles. The robotics research will try to create more life-like systems. The output of all three schools will be published under licences that enable others to build upon their work.

Similar gaps between the performance of processors and software have arisen in the past. Each time, the software industry has eventually caught up, thanks to better and more sophisticated programs. The hardware firms are hoping their grants will help programmers catch up once again, by spreading the load—just as their processors are supposed to do.