Friday, May 28, 2010
In Taiwan, an "electronic schoolbag" costs more than NT$50,000 / $100 Laptop Project
Friday, May 21, 2010
SCIENTISTS CHALLENGE THINKING ABOUT LIFE AFTER TRANSFORMING CHEMICALS INTO LIVING ORGANISM
By Clive Cookson, Science Editor
Scientists have turned inanimate chemicals into a living organism in an experiment that raises profound questions about the essence of life. Craig Venter, the US genomics pioneer, announced last night that scientists at his laboratories in Maryland and California had succeeded in their 15-year project to make the world's first “synthetic cells” – bacteria called Mycoplasma mycoides. “We have passed through a critical psychological barrier,” Dr Venter told the Financial Times.
“It has changed my own thinking, both scientifically and philosophically, about life and how it works.” The bacteria's genes were all constructed in the laboratory “from four bottles of chemicals on a chemical synthesiser, starting with information on a computer,” he said. The research – published online by the journal Science – was hailed as a landmark by many independent scientists and philosophers. Julian Savulescu, ethics professor at Oxford University, said: “This is a step towards . . . creation of living beings with capacities and natures that could never have naturally evolved.”
The synthetic bacteria have 14 “watermark sequences” attached to their genome – inert stretches of DNA added to distinguish them from their natural counterparts. They behaved and divided in lab dishes like natural bacteria. M mycoides was chosen as a simple microbe with which to develop and prove the technology.
It has no immediate application. But scientists at the J Craig Venter Institute and Synthetic Genomics, the company funding their research, intend to move on to more useful targets that may not exist in nature. They are particularly interested in designing algae that can capture carbon dioxide from the air and produce hydrocarbon fuels. Last year Synthetic Genomics signed a $600m agreement with ExxonMobil to make algal biofuels. “We have looked hard at natural algae and we can't find one that can make the fuels we want on the scales we need,” Dr Venter said.
Tuesday, May 18, 2010
New York Times Daily Technology
Creatures of Cambrian May Have Lived On
By JOHN NOBLE WILFORD
The discovery of 480-million-year-old fossils in Morocco supports the likelihood that the Cambrian Explosion’s diverse life continued to evolve.
A New Clue to Explain Existence
By DENNIS OVERBYE
New evidence could help clear up why the universe is composed of matter and not its opposite, antimatter.
FINDINGS
Doomsayers Beware, a Bright Future Beckons
By JOHN TIERNEY
While schools of despair await the end of modern civilization, the writer Matt Ridley expects less poverty and disease, and greater freedom and happiness.
TECHNOLOGY
At YouTube, Adolescence Begins at 5
By BRAD STONE
Chad Hurley, YouTube’s chief executive, says that professional video is drawing ever larger and more engaged audiences to the Google subsidiary, a sign that the site is growing up.
Filmmakers Tread Softly on Early Release to Cable
By MICHAEL CIEPLY
Studios now can block new films from being copied on pay-per-view systems, but they fear theaters’ reaction if they sell to cable before DVDs are out.
What We're Reading: Strategy and Back Story
By THE NEW YORK TIMES
Our reading list includes items about an important engineer leaving Twitter, Microsoft focusing on scientific computing and Facebook privacy -- of course.
• More Technology News
Thursday, May 13, 2010
Battling the Cyber Warmongers
ESSAY | MAY 8, 2010
Cyberattacks are inevitable but the threat has been exaggerated by those with a vested interest
By EVGENY MOROZOV
A recent simulation of a devastating cyberattack on America was crying for a Bruce Willis lead: A series of mysterious attacks—probably sanctioned by China but traced to servers in the Russian city of Irkutsk—crippled much of the national infrastructure, including air traffic, financial markets and even basic email. If this was not bad enough, an unrelated electricity outage took down whatever remained of the already unplugged East Coast.
The simulation—funded by a number of major players in network security, organized by the Bipartisan Policy Center, a Washington-based think tank, and broadcast on CNN on a Saturday night—had an unexpected twist. The American government appeared incompetent, indecisive and confused (past government officials, including former Secretary of Homeland Security Michael Chertoff and former Deputy Secretary of State John Negroponte, were recruited to play this glamorous role on TV). "The U.S. is unprepared for cyberwar," the simulation's organizers grimly concluded.
The past few months have been packed with cyber-jingoism from former and current national security officials. Richard Clarke, a former cybersecurity adviser to two administrations, says in his new book that "cyberwar has already begun." Testifying in Congress in February, Mike McConnell, former head of the National Security Agency, argued that "if we went to war today in a cyberwar, we would lose." Speaking in late April, Director of Central Intelligence Leon Panetta said that "the next Pearl Harbor is likely to be a cyberattack going after our grid."
The murky nature of recent attacks on Google—in which someone tricked a Google employee into opening a malicious link that eventually allowed intruders to access parts of Google's password-managing software, potentially compromising the security of several Chinese human rights activists—has only added to public fears. If the world's most innovative technology company cannot protect its computers from such digital aggression, what can we expect from the bureaucratic chimera that is the Department of Homeland Security?
Google should be applauded for going on the record about the cyber-attacks; most companies prefer to keep quiet about such incidents. But do hundreds—or even thousands—of such incidents that target both the private and the public sector add up to the imminent threat of a "cyberwar" that is worthy of such hype? The evidence so far looks too shaky.
Ironically, the more we spend on securing the Internet, the less secure we appear to feel. A 2009 report by Input, a marketing intelligence firm, projected that government spending on cybersecurity would grow at a compound rate of 8.1% in the next five years. A March report from consulting firm Market Research Media estimates that the government's total spending on cybersecurity between now and 2015 is set to hit $55 billion, with strong growth predicted in areas such as Internet-traffic surveillance and monitoring.
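For a sense of scale, a compound rate is easy to unpack. In the sketch below, only the 8.1% rate comes from the Input report; the starting budget is a made-up round number for illustration:

```typescript
// What an 8.1% compound annual growth rate implies over five years.
// The $10B starting figure is hypothetical; only the 8.1% rate is from the report.
const rate = 0.081;
let spending = 10; // billions of dollars, illustrative base
for (let year = 1; year <= 5; year++) {
  spending *= 1 + rate;
}
console.log(spending.toFixed(2)); // ~14.76: roughly 1.5x the base after five years
```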
Given the history of excessively tight connections between our government and many of its contractors, it's quite possible that the over-dramatized rhetoric of those cheerleading the cyberwar has helped to add at least a few billion dollars to this price tag. Mr. McConnell's current employer, Booz Allen Hamilton, has just landed $34 million in cybersecurity contracts with the Air Force. In addition to writing books on the subject, Richard Clarke is a partner in a security firm, Good Harbor Consulting.
"The point we have made about cyberwar is that the U.S. has created a large and expensive cyberwar command, as have other nations. Thus, the government thinks cyberwar is possible no matter what the naysayers think," says Mr. Clarke in an email. Mr. Clarke says 90% of his firm's revenue in 2009 and 2010 to date comes from consulting unrelated to cybersecurity, and none of the proposals from his book would financially benefit Good Harbor. In a statement, Booz Allen Hamilton says of Mr. McConnell: "As director of national intelligence he delivered the same messages of concern about the vulnerability of our cyber-infrastructure to President George W. Bush and presidential candidate Barack Obama…As a longstanding intelligence professional, McConnell has an awareness across the full spectrum of classification,and sees it as his duty in public service to foster the right kind of discussion so the nation's leadership can debate and mitigate the risks."
Cyberspace's Big Bullies
From 'Dark Dante' to the originator of the Internet worm, a selection of notorious hackers. Kevin Poulsen, a hacker known as "Dark Dante," was indicted in 1990 for penetrating government and telephone company computer systems, including an Army computer network. He was also charged with illegally obtaining a complete set of secret flight orders for thousands of Army paratroopers who were on a military exercise in North Carolina. He went on to become a journalist.
Jonathan James, the then-16-year-old Florida hacker, was sentenced in 2000 to six months at a juvenile detention center for invading NASA and Pentagon computers. "Never again," he told the Miami Herald. "It's not worth it, because all of it was for fun and games, and they're putting me in jail for it. I don't want that to happen again. I can find other stuff for fun."
At the age of 21, Jeanson James Ancheta was sentenced in 2006 to nearly five years in federal prison for taking control of 400,000 Internet-connected computers and renting access to them to spammers and fellow hackers. Among the machines infected were those at the China Lake Naval Air Facility and the Defense Information System Agency in Falls Church, Va.
Both Messrs. McConnell and Clarke—as well as countless others who have made a successful transition from trying to fix the government's cybersecurity problems from within to offering their services to do the same from without—are highly respected professionals and their opinions should not be taken lightly, if only because they have seen more classified reports. Their stature, however, does not relieve them of the responsibility to provide some hard evidence to support their claims. We do not want to sleepwalk into a cyber-Katrina, but neither do we want to hold our policy-making hostage to the rhetorical ploys of better-informed government contractors.
Stephen Walt, a professor of international politics at Harvard, believes that the nascent debate about cyberwar presents "a classical opportunity for threat inflation." Mr. Walt points to the resemblance between our current deliberations about online security and the debate about nuclear arms during the Cold War. Back then, those working in weapons labs and the military tended to hold more alarmist views than many academic experts, arguably because the livelihoods of university professors did not depend on hyping the need for an arms race.
Marcus Ranum, a veteran of the network security industry and a noted critic of the cyberwar hype, points to another similarity with the Cold War. Today's hype, he says, leads us to believe that "we need to develop an offensive capability in order to defend against an attack that isn't coming—it's the old 'bomber gap' all over again: a flimsy excuse to militarize."
How dire is the threat? Ask two experts and you will get different opinions. Just last month, Lt. Gen. Keith Alexander, director of the NSA, told the Senate's Armed Services Committee that U.S. military networks were seeing "hundreds of thousands of probes a day." However, speaking at a March conference in San Francisco, Howard Schmidt, Obama's recently appointed cybersecurity czar, said that "there is no cyberwar," adding that it is "a terrible metaphor" and a "terrible concept."
The truth is, not surprisingly, somewhere in between. There is no doubt that the Internet brims with spamming, scamming and identity fraud. Having someone wipe out your hard drive or bank account has never been easier, and the tools for committing electronic mischief on your enemies are cheap and widely accessible.
This is the inevitable cost of democratizing access to multi-purpose technologies. Just as any blogger can now act like an Ed Murrow, so can any armchair-bound cyberwarrior act like the über-hacker Kevin Mitnick, who was once America's most-wanted computer criminal and now runs a security consulting firm. But just as it is wrong to conclude that the amateurization of media will bring on a renaissance of high-quality journalism, so it is wrong to conclude that the amateurization of cyberattacks will usher in a brave new world of destructive cyberwarfare.
In his Senate testimony—part of his confirmation process to head the Pentagon's new Cyber Command—Gen. Alexander of the NSA explained that those "hundreds of thousands of probes" could allow attackers to "scan the network to see what kind of operating system you have to facilitate…an attack." This may have scared our mostly technophobic senators, but it's so vague that even some of the most basic attacks available via the Internet—including those organized by "script kiddies," or amateurs who use scripts and programs developed by professional hackers—fall under this category. Facing so many probes is often the reality of being connected to the Internet. The number of attacks is not a very meaningful indicator of the problem, especially in an era when virtually anyone can launch them.
From a strictly military perspective, "cyberwar"—with a small "c"—may very well exist, playing second fiddle to ongoing military conflict, the one with tanks, shellfire and all. The Internet—much like the possibility of air combat a century ago—has opened new possibilities for military operations: block the dictator's bank account or shut down his propaganda-infested broadcast media. Such options were already on the table—even though they appear to have been used sparingly— during a number of recent wars. Back in 1999, Gen. Wesley Clark, then the outgoing supreme allied commander in Europe, instilled American policy makers with high hopes when he said in Senate testimony that NATO could have "methods to isolate Milosevic and his political parties electronically," thus preventing "the use of the military instrument."
Why have such tactics—known in military parlance as "computer network attacks"—not been used more widely? As revolutionary as it is, the Internet does not make centuries-old laws of war obsolete or irrelevant. Military conventions, for example, require that attacks distinguish between civilian and military targets. In decentralized and interconnected cyberspace, this requirement is not so easy to satisfy: A cyberattack on a cellphone tower used by the adversary may affect civilian targets along with military ones. When in 2008 the U.S. military decided to dismantle a Saudi Internet forum—initially set up by the CIA to glean intelligence but increasingly used by the jihadists to plan attacks in Iraq—it inadvertently caused disruption to more than 300 servers in Saudi Arabia, Germany and Texas. A weapon of surgical precision the Internet certainly isn't, and damage to civilians is hard to avoid. Military commanders do not want to be tried for war crimes, even if those crimes are committed online.
As Gen. Clark pointed out in 1999, cyberwarfare may one day give us a more humane way to fight wars (why, for example, bomb a train depot if you can just temporarily disable its computer networks?), so we shouldn't reject it out of hand. The main reason why this concept conjures strong negative connotations is because it is often lumped with all the other evil activities that take place online—cybercrime, cyberterrorism, cyber-espionage. Such lumping, however, obscures important differences. Cybercriminals are usually driven by profit, while cyberterrorists are driven by ideology. Cyber-spies want the networks to stay functional so that they can gather intelligence, while cyberwarriors—the pure type, those working on military operations—want to destroy them.
All of these distinct threats require quite distinct policy responses that can balance the risks with the levels of devastation. We probably want very strong protection against cyberterror, moderate protection against cybercrime, and little to no protection against juvenile cyber-hooliganism.
Perfect security—in cyberspace or in the real world—has huge political and social costs, and most democratic societies would find it undesirable. There may be no petty crime in North Korea, but achieving such "security" requires accepting all other demands of living in an Orwellian police state. Just like we don't put up armed guards to protect every city wall from graffiti, we should not overreact in cyberspace.
Recasting basic government problems in terms of a global cyber struggle won't make us any more secure. The real question is, "Why are government computers so vulnerable to very basic and unsophisticated threats?" This is not a question of national security; it is a question of basic government incompetence. Cyberwar is the new "dog ate my homework": It's far easier to blame everything on mysterious Chinese hackers than to embark on uncomfortable institutional soul-searching.
Thus, when a series of fairly unsophisticated attacks crashed the websites of 27 government agencies—including those of the Treasury Department, Secret Service and Transportation Department—during last year's July Fourth weekend, it was panic time. North Korea was immediately singled out as their likely source (websites of the South Korean government were also affected). But whoever was behind the attacks, it was not their sophistication or strength that crashed the government's websites. Network security firm Arbor Networks described the attacks as "pretty modest-sized." What crashed the websites was the incompetence of the people who ran them. If "pretty modest-sized" attacks can cripple them, someone is not doing their job.
What we do not want to do is turn "weapons of mass disruption"—as Barack Obama dubbed cyberattacks in 2009—into weapons of mass distraction, diverting national attention from more burning problems while promoting extremely costly solutions.
For example, a re-engineering of the Internet to make it easier to trace the location of cyberattackers, as some have called for, would surely be expensive, impractical and extremely harmful to privacy. If today's attacks are mostly anonymous, tomorrow they would be performed using hijacked and fully authenticated computers of old ladies.
What is worse, any major re-engineering of the Internet could derail other ambitious initiatives of the U.S. government, especially its efforts to promote Internet freedom. Urging China and Iran to keep their hands off the Internet would work only if Washington sticks to its own advice; otherwise, we are trading in hype.
In reality, we don't need to develop a new set of fancy all-powerful weaponry to secure cyberspace. In most cases the threats are the same as they were 20 years ago; we still need to patch security flaws, update anti-virus databases and ban suspicious users from our sites. It's human nature, not the Internet, that we need to conquer and re-engineer to feel more secure. But it's through rational deliberation, not fear-mongering, that we can devise policies that will accomplish this.
—Evgeny Morozov is a fellow at Georgetown University and a contributing editor to Foreign Policy. His book about the Internet and democracy is forthcoming.

Monday, May 10, 2010
HTML5
On April 29, 2010, Apple CEO Steve Jobs published "Thoughts on Flash," an essay publicly criticizing Adobe Systems' "Flash" software technology…
Apple has a long relationship with Adobe. In fact, we met Adobe’s founders when they were in their proverbial garage. Apple was their first big customer, adopting their PostScript language for our new LaserWriter printer. Apple invested in Adobe and owned around 20% of the company for many years. The two companies worked closely together to pioneer desktop publishing and there were many good times. Since that golden era, the companies have grown apart. Apple went through its near death experience, and Adobe was drawn to the corporate market with their Acrobat products. Today the two companies still work together to serve their joint creative customers – Mac users buy around half of Adobe’s Creative Suite products – but beyond that there are few joint interests.
I wanted to jot down some of our thoughts on Adobe’s Flash products so that customers and critics may better understand why we do not allow Flash on iPhones, iPods and iPads. Adobe has characterized our decision as being primarily business driven – they say we want to protect our App Store – but in reality it is based on technology issues. Adobe claims that we are a closed system, and that Flash is open, but in fact the opposite is true. Let me explain.
First, there’s “Open”.
Adobe’s Flash products are 100% proprietary. They are only available from Adobe, and Adobe has sole authority as to their future enhancement, pricing, etc. While Adobe’s Flash products are widely available, this does not mean they are open, since they are controlled entirely by Adobe and available only from Adobe. By almost any definition, Flash is a closed system.
Apple has many proprietary products too. Though the operating system for the iPhone, iPod and iPad is proprietary, we strongly believe that all standards pertaining to the web should be open. Rather than use Flash, Apple has adopted HTML5, CSS and JavaScript – all open standards. Apple’s mobile devices all ship with high performance, low power implementations of these open standards. HTML5, the new web standard that has been adopted by Apple, Google and many others, lets web developers create advanced graphics, typography, animations and transitions without relying on third party browser plug-ins (like Flash). HTML5 is completely open and controlled by a standards committee, of which Apple is a member.
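To make concrete what graphics and animation "without relying on third party browser plug-ins" look like in practice, here is a minimal sketch against the standard canvas API; the "stage" element id is hypothetical, and this is an illustration rather than anything from Apple:

```typescript
// A square drifting across an HTML5 canvas, animated with
// requestAnimationFrame -- no browser plug-in involved.
// Assumes the page contains <canvas id="stage" width="300" height="100">.
const canvas = document.getElementById("stage") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
let x = 0;

function frame(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the last frame
  ctx.fillStyle = "#0066cc";
  ctx.fillRect(x, 40, 20, 20);                      // draw a 20x20 square
  x = (x + 2) % canvas.width;                       // advance, wrapping around
  requestAnimationFrame(frame);                     // schedule the next frame
}

requestAnimationFrame(frame);
```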
Apple even creates open standards for the web. For example, Apple began with a small open source project and created WebKit, a complete open-source HTML5 rendering engine that is the heart of the Safari web browser used in all our products. WebKit has been widely adopted. Google uses it for Android’s browser, Palm uses it, Nokia uses it, and RIM (Blackberry) has announced they will use it too. Almost every smartphone web browser other than Microsoft’s uses WebKit. By making its WebKit technology open, Apple has set the standard for mobile web browsers.
Second, there’s the “full web”.
Adobe has repeatedly said that Apple mobile devices cannot access “the full web” because 75% of video on the web is in Flash. What they don’t say is that almost all this video is also available in a more modern format, H.264, and viewable on iPhones, iPods and iPads. YouTube, with an estimated 40% of the web’s video, shines in an app bundled on all Apple mobile devices, with the iPad offering perhaps the best YouTube discovery and viewing experience ever. Add to this video from Vimeo, Netflix, Facebook, ABC, CBS, CNN, MSNBC, Fox News, ESPN, NPR, Time, The New York Times, The Wall Street Journal, Sports Illustrated, People, National Geographic, and many, many others. iPhone, iPod and iPad users aren’t missing much video.
Another Adobe claim is that Apple devices cannot play Flash games. This is true. Fortunately, there are over 50,000 games and entertainment titles on the App Store, and many of them are free. There are more games and entertainment titles available for iPhone, iPod and iPad than for any other platform in the world.
Third, there’s reliability, security and performance.
Symantec recently highlighted Flash for having one of the worst security records in 2009. We also know first hand that Flash is the number one reason Macs crash. We have been working with Adobe to fix these problems, but they have persisted for several years now. We don’t want to reduce the reliability and security of our iPhones, iPods and iPads by adding Flash.
In addition, Flash has not performed well on mobile devices. We have routinely asked Adobe to show us Flash performing well on a mobile device, any mobile device, for a few years now. We have never seen it. Adobe publicly said that Flash would ship on a smartphone in early 2009, then the second half of 2009, then the first half of 2010, and now they say the second half of 2010. We think it will eventually ship, but we’re glad we didn’t hold our breath. Who knows how it will perform?
Fourth, there’s battery life.
To achieve long battery life when playing video, mobile devices must decode the video in hardware; decoding it in software uses too much power. Many of the chips used in modern mobile devices contain a decoder called H.264 – an industry standard that is used in every Blu-ray DVD player and has been adopted by Apple, Google (YouTube), Vimeo, Netflix and many other companies.
Although Flash has recently added support for H.264, the video on almost all Flash websites currently requires an older generation decoder that is not implemented in mobile chips and must be run in software. The difference is striking: on an iPhone, for example, H.264 videos play for up to 10 hours, while videos decoded in software play for less than 5 hours before the battery is fully drained.
When websites re-encode their videos using H.264, they can offer them without using Flash at all. They play perfectly in browsers like Apple’s Safari and Google’s Chrome without any plugins whatsoever, and look great on iPhones, iPods and iPads.
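A minimal sketch of how a site might wire up such plugin-free playback, using only standard DOM calls; the video URL is a placeholder and the snippet is illustrative, not anyone's production code. canPlayType asks the browser whether it can decode H.264 natively:

```typescript
// Ask the browser whether it can decode H.264 natively, and if so attach
// an HTML5 <video> element -- no Flash plug-in required.
// The URL is a placeholder, not a real file.
const video = document.createElement("video");
const verdict = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

if (verdict === "probably" || verdict === "maybe") {
  video.src = "https://example.com/clip.mp4"; // H.264-encoded source (placeholder)
  video.controls = true;
  document.body.appendChild(video);
} else {
  console.log("No native H.264 support; a fallback player would be needed.");
}
```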
Fifth, there’s Touch.
Flash was designed for PCs using mice, not for touch screens using fingers. For example, many Flash websites rely on “rollovers”, which pop up menus or other elements when the mouse arrow hovers over a specific spot. Apple’s revolutionary multi-touch interface doesn’t use a mouse, and there is no concept of a rollover. Most Flash websites will need to be rewritten to support touch-based devices. If developers need to rewrite their Flash websites, why not use modern technologies like HTML5, CSS and JavaScript?
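As an illustration of the rewrite involved (the "menu" element and "open" CSS class below are hypothetical), the same menu can be driven by a classic rollover on the desktop and by a tap on a touch screen, using standard DOM events:

```typescript
// One menu, two input models: a rollover for mouse users and a tap for
// touch screens. The "menu" id and the "open" CSS class are hypothetical.
const trigger = document.getElementById("menu")!;

function openMenu(): void {
  trigger.classList.add("open"); // CSS reveals the submenu
}

// Desktop: the classic rollover that Flash sites rely on.
trigger.addEventListener("mouseover", openMenu);

// Touch devices: there is no hover, so open on the first tap instead.
trigger.addEventListener("touchstart", (e: TouchEvent) => {
  e.preventDefault(); // suppress the browser's emulated mouse events
  openMenu();
});
```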
Even if iPhones, iPods and iPads ran Flash, it would not solve the problem that most Flash websites need to be rewritten to support touch-based devices.
Sixth, the most important reason.
Besides the fact that Flash is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we do not allow Flash on iPhones, iPods and iPads. We have discussed the downsides of using Flash to play video and interactive content from websites, but Adobe also wants developers to adopt Flash to create apps that run on our mobile devices.
We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of platform enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.
This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.
Flash is a cross platform development tool. It is not Adobe’s goal to help developers write the best iPhone, iPod and iPad apps. It is their goal to help developers write cross platform apps. And Adobe has been painfully slow to adopt enhancements to Apple’s platforms. For example, although Mac OS X has been shipping for almost 10 years now, Adobe just adopted it fully (Cocoa) two weeks ago when they shipped CS5. Adobe was the last major third party developer to fully adopt Mac OS X.
Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.
Conclusions.
Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.
The avalanche of media outlets offering their content for Apple’s mobile devices demonstrates that Flash is no longer necessary to watch video or consume any kind of web content. And the 200,000 apps on Apple’s App Store prove that Flash isn’t necessary for tens of thousands of developers to create graphically rich applications, including games.
New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind.
Steve Jobs
April, 2010
In Mobile Age, Sound Quality Steps Back
By JOSEPH PLAMBECK
Published: May 9, 2010
At the ripe age of 28, Jon Zimmer is sort of an old fogey. That is, he is obsessive about the sound quality of his music.
A onetime audio engineer who now works as a consultant for Stereo Exchange, an upscale audio store in Manhattan, Mr. Zimmer lights up when talking about high fidelity, bit rates and $10,000 loudspeakers.
But iPods and compressed computer files — the most popular vehicles for audio today — are “sucking the life out of music,” he says.
The last decade has brought an explosion in dazzling technological advances — including enhancements in surround sound, high definition television and 3-D — that have transformed the fan’s experience. There are improvements in the quality of media everywhere — except in music.
In many ways, the quality of what people hear — how well the playback reflects the original sound — has taken a step back. To many expert ears, compressed music files produce a crackly, tinnier and thinner sound than music on CDs and certainly on vinyl. And to compete with other songs, tracks are engineered to be much louder as well.
In one way, the music business has been the victim of its own technological success: the ease of loading songs onto a computer or an iPod has meant that a generation of fans has happily traded fidelity for portability and convenience. This is the obstacle the industry faces in any effort to create higher-quality — and more expensive — ways of listening.
“If people are interested in getting a better sound, there are many ways to do it,” Mr. Zimmer said. “But many people don’t even know that they might be interested.”
Take Thomas Pinales, a 22-year-old from Spanish Harlem and a fan of some of today’s most popular artists, including Lady Gaga, Jay-Z and Lil Wayne. Mr. Pinales listens to his music stored on his Apple iPod through a pair of earbuds, and while he wouldn’t mind upgrading, he is not convinced that it would be worth the cost.
“My ears aren’t fine tuned,” he said. “I don’t know if I could really tell the difference.”
The change in sound quality is as much cultural as technological. For decades, starting around the 1950s, high-end stereos were a status symbol. A high-quality system was something to show off, much like a new flat-screen TV today.
But Michael Fremer, a professed audiophile who runs musicangle.com, which reviews albums, said that today, “a stereo has become an object of scorn.”
The marketplace reflects that change. From 2000 to 2009, Americans reduced their overall spending on home stereo components by more than a third, to roughly $960 million, according to the Consumer Electronics Association, a trade group. Spending on portable digital devices during that same period increased more than fiftyfold, to $5.4 billion.
“People used to sit and listen to music,” Mr. Fremer said, but the increased portability has altered the way people experience recorded music. “It was an activity. It is no longer consumed as an event that you pay attention to.”
Instead, music is often carried from place to place, played in the background while the consumer does something else — exercising, commuting or cooking dinner.
The songs themselves are usually saved on the digital devices in a compressed format, often as an AAC or MP3 file. That compression shrinks the size of the file, eliminating some of the sounds and range contained on a CD while allowing more songs to be saved on the device and reducing download times.
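The arithmetic behind that trade-off is straightforward: file size is bitrate times duration. In the rough sketch below, the four-minute track length is merely a typical value; the 128 and 256 kbps figures match the iTunes rates discussed later in the article, and 1,411 kbps is the standard uncompressed CD rate:

```typescript
// Rough file size of an audio track: bitrate (kilobits per second)
// times duration in seconds, divided by 8 bits per byte.
function trackSizeMB(bitrateKbps: number, seconds: number): number {
  return (bitrateKbps * 1000 * seconds) / 8 / 1_000_000;
}

const fourMinutes = 240; // seconds; a typical track length, chosen for illustration
console.log(trackSizeMB(128, fourMinutes).toFixed(1));  // ~3.8 MB  (old iTunes rate)
console.log(trackSizeMB(256, fourMinutes).toFixed(1));  // ~7.7 MB  (current iTunes rate)
console.log(trackSizeMB(1411, fourMinutes).toFixed(1)); // ~42.3 MB (uncompressed CD audio)
```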
Even if music companies and retailers like the iTunes Store, which opened in April 2003, wanted to put an emphasis on sound quality, they faced technical limitations at the start, not to mention economic ones.
“It would have been very difficult for the iTunes Store to launch with high-quality files if it took an hour to download a single song,” said David Dorn, a senior vice president at Rhino Entertainment, a division of Warner Music that specializes in high-quality recordings.
The music industry has not failed to try. About 10 years ago, two new high-quality formats — DVD Audio and SACD, for Super Audio CD — entered the marketplace, promising sound superior even to that of a CD. But neither format gained traction. In 2003, 1.7 million DVD Audio and SACD titles were shipped, according to the Recording Industry Association of America. But by 2009, only 200,000 SACD and DVD Audio titles were shipped.
Last year, the iTunes Store upgraded the standard quality for a song to 256 kilobits per second from 128 kilobits per second, preserving more details and eliminating the worst crackles.
Some online music services are now marketing an even higher-quality sound as a selling point. Mog, a new streaming music service, announced in March an application for smartphones that would allow the service’s subscribers to save songs onto their phone. The music will be available on the phone as long as the subscriber pays the $10 monthly fee. Songs can be downloaded at up to 320 kilobits per second.
Another company, HDtracks.com, started selling downloads last year that contain even more information than CDs at $2.49 a song. Right now, most of the available tracks are of classical or jazz music.
David Chesky, a founder of HDtracks and composer of jazz and classical music, said the site tried to put music on a pedestal.
“Musicians work their whole life trying to capture a tone, and we’re trying to take advantage of it,” Mr. Chesky said. “If you want to listen to a $3 million Stradivarius violin, you need to hear it in a hall that allows the instrument to sound like $3 million.”
Still, these remain niche interests so far, and they are complicated by changes in the recording process. With the rise of digital music, fans listen to fewer albums straight through. Instead, they move from one artist’s song to another’s. Pop artists and their labels, meanwhile, shudder at the prospect of having their song seem quieter than the previous song on a fan’s playlist.
So audio engineers, acting as foot soldiers in a so-called volume war, are often enlisted to increase the overall volume of a recording.
Randy Merrill, an engineer at MasterDisk, a New York City company that creates master recordings, said that to achieve an overall louder sound, engineers raise the softer volumes toward peak levels. On a quality stereo system, Mr. Merrill said, the reduced volume range can leave a track sounding distorted. “Modern recording has gone overboard on the volume,” he said.
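A crude sketch of the kind of processing described here, purely for illustration (real mastering limiters are far more sophisticated, and this is not MasterDisk's actual chain): raise everything toward full scale and clip whatever exceeds it, trading dynamic range for loudness:

```typescript
// A deliberately crude loudness maximizer: multiply every sample by a fixed
// gain, then hard-clip at full scale. Soft passages come up toward the peak;
// anything already loud distorts. Samples are floats in [-1, 1].
function loudnessMaximize(samples: Float32Array, gain: number): Float32Array {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const boosted = samples[i] * gain;           // push quiet parts upward
    out[i] = Math.max(-1, Math.min(1, boosted)); // clip the peaks flat
  }
  return out;
}
// With gain = 4 (about +12 dB), a passage at -24 dB rises to -12 dB, while
// peaks that were already near full scale are squashed -- the distortion on a
// good stereo that Mr. Merrill describes.
```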
In fact, among younger listeners, the lower-quality sound might actually be preferred. Jonathan Berger, a professor of music at Stanford, said he had conducted an informal study among his students and found that, over the roughly seven years of the study, an increasing number of them preferred the sound of files with less data over the high-fidelity recordings.
“I think our human ears are fickle. What’s considered good or bad sound changes over time,” Mr. Berger said. “Abnormality can become a feature.”