On Tuesday, the White House released a chilling report on AI and the economy. It began by positing that “it is to be expected that machines will continue to reach and exceed human performance on more and more tasks,” and it warned of massive job losses.
Yet to counter this threat, the government makes a recommendation that may sound absurd: we have to increase investment in AI. The risk to productivity and the US’s competitive advantage is too high to do anything but double down on it.
This approach not only makes sense, but also is the only approach that makes sense. It’s easy — and justified — to worry about the millions of individual careers that something like self-driving cars and trucks will retool, but we also have chasms of need that machine learning could help fill. Our medical system is deeply flawed; intelligent agents could spread affordable, high-quality healthcare to more people in more places. Our education infrastructure is not adequately preparing students for the looming economic upheaval; here, too, AI systems could chip in where teachers are spread too thin. We might gain energy independence by developing much smarter infrastructure, as Google subsidiary DeepMind did for its parent company’s power usage. The opportunities are too great to ignore.
More important, we have to think beyond narrow classes of threatened jobs, because today’s AI leaders—at Google and elsewhere—are already laying the groundwork for an even more ambitious vision, the former pipe dream that is general artificial intelligence.
To visit the front lines of the great AI takeover is to observe machine learning systems routinely drubbing humans in narrow, circumscribed domains. This year, many of the most visible contestants in AI’s face-off with humanity have emerged from Google. In March, the world’s top Go player weathered a humbling defeat against DeepMind’s AlphaGo. Researchers at DeepMind also produced a system that can lip-read videos with an accuracy that leaves humans in the dust. A few weeks ago, Google computer scientists working with medical researchers reported an algorithm that can detect diabetic retinopathy in images of the eye as well as an ophthalmologist can. It’s an early step toward a goal many companies are now chasing: to assist doctors by automating the analysis of medical scans.
Also this fall, Microsoft unveiled a system that can transcribe human speech with greater accuracy than professional stenographers. Speech recognition is the basis of systems like Cortana, Alexa, and Siri, and matching human performance in this task has been a goal for decades. For Microsoft chief speech scientist XD Huang, “It’s personally almost like a dream come true after 30 years.”
But AI’s 2016 victories over humans are just the beginning. Emerging research suggests we will soon move from these slim slivers of intelligence to something richer and more complex. Though a true general intelligence is at least decades away, society will still see massive change as these systems acquire an ever-widening circle of mastery. That’s why the White House (well, at least while Obama’s still in office) isn’t shrinking from it. We are in the midst of developing a powerful force that will transform everything we do.
To ignore this trend — to not plunge headlong into understanding it, shaping it, monitoring it — might well be the biggest mistake a country could make.
Training one system to do many things is exactly what it takes to develop a general intelligence, and juicing up that process is now a core focus of AI boosters. Earlier this month OpenAI, the research consortium dreamed up by Elon Musk and Sam Altman, unveiled Universe, an environment for training systems that are not just accomplished at a single task, but that can hop around and become adept at various activities.
As cofounder Ilya Sutskever puts it, “If you try to look forward and see what it is exactly we mean by ‘intelligence,’ it definitely involves not just solving one problem, but a large number of problems. But what does it mean for a general agent to be good, to be intelligent? These are not completely obvious questions.”
So he and his team designed Universe as a way to help others measure the general problem-solving abilities of AI agents. It includes about a thousand Atari games, Flash games, and browser tasks. If you were to enter whatever AI you’re building into the training ring that is Universe, it would be equipped with the same tools a human uses to manipulate a computer: a screen on which to observe the action, and a virtual keyboard and mouse.
The intent is for an AI to learn how to navigate one Universe environment, such as Wing Commander III, then apply that experience to quickly get up to speed in the next environment, which could be another game, such as World of Goo, or something as different as Wolfram Mathematica. A successful AI agent would display some transfer learning, with a degree of agility and reasoning.
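The contract the article describes — raw pixels in, keyboard and mouse events out — can be sketched with a toy loop. The environment and agent below are entirely hypothetical stand-ins (Universe's real API differed in its details); they only illustrate the shape of the interface an agent would have to master.

```python
import random

# Hypothetical sketch of the screen-in, keys-out contract described above;
# Universe's real API differed in its details.
SCREEN_W, SCREEN_H = 160, 120
KEYS = ["ArrowUp", "ArrowDown", "ArrowLeft", "ArrowRight", "Space"]

class ToyScreenEnv:
    """Stand-in environment: observations are raw pixel rows,
    actions are (event_type, key, is_pressed) tuples."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        return self._screen()

    def step(self, action):
        event_type, key, pressed = action
        # Toy reward: this fake game happens to reward holding ArrowUp.
        reward = 1.0 if key == "ArrowUp" and pressed else 0.0
        return self._screen(), reward, False

    def _screen(self):
        # One grayscale frame, height x width.
        return [[self.rng.randrange(256) for _ in range(SCREEN_W)]
                for _ in range(SCREEN_H)]

def random_agent(screen, rng=random.Random(1)):
    """Ignores the pixels and presses a random key -- the baseline
    any learning agent has to beat."""
    return ("KeyEvent", rng.choice(KEYS), True)

env = ToyScreenEnv()
screen = env.reset()
total = 0.0
for _ in range(10):
    screen, reward, done = env.step(random_agent(screen))
    total += reward
```

Because the same screen/keyboard/mouse interface applies to every environment, an agent that does well here could, in principle, be dropped unchanged into the next game.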
This approach is not without precedent. In 2013, DeepMind revealed a single deep learning-based algorithm that discovered, on its own, how to play six out of seven Atari games on which it was tested. For three of those games — Breakout, Enduro, and Pong — it outperformed a human expert player. Universe is a sort of scaled-up version of that DeepMind success story.
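DeepMind's 2013 Atari agent rested on Q-learning, with a deep network standing in for the value table at Atari scale. A tabular sketch of the underlying update rule (a simplification, not DeepMind's full method):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference step: move the estimate Q(s, a) toward
    the observed reward plus the discounted value of the best next action."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Tiny worked example: two actions, one rewarded transition.
Q = defaultdict(float)
actions = ["left", "right"]
q_update(Q, "s0", "right", 1.0, "s1", actions)  # moves Q("s0","right") toward 1.0
```

The same update, applied to millions of frames with a convolutional network approximating Q, is what let a single algorithm learn several games from pixels alone.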
As Universe grows, AI trainees can start learning innumerable useful computer-related skills. After all, it is essentially a portal into the world of any contemporary desk jockey. The diversity of Universe environments might even allow AI agents to pick up some broad world knowledge that otherwise would be tough to collect.
It’s a bit of a leap from a Flash-and-Atari champion to an agent that improves the quality of healthcare, but that’s because our intelligent systems are still in kindergarten. For many years, AI hadn’t made it even this far. Now it is on the path to first grade, middle school, and eventually, advanced degrees.
Yes, the outcome is uncertain. Yes, it’s totally scary. But we have a choice now. We can try to shut down this murky future that we can neither fully control nor predict, and run the risk that the technology seeps out unbidden, potentially triggering massive displacement. Or we can actively try to guide it to the paths of greatest social gain, and encourage the future we want to see.
I’m with the White House on this one. A deep learning-powered world is coming, and we might as well rush right into it.
Vocabulary
- chilling: frightening; ominous
- posit: to assume; to postulate
- counter: a counter; to offset, to oppose
- retool: to re-equip; to refit (a factory's machinery or a company's equipment)
- chasm: a deep gap; a profound divide
- flaw: a defect; a blemish
- loom: to appear indistinctly; to be imminent
- takeover: a takeover; an acquisition
- circumscribe: to restrict; to limit
- stenographer: a shorthand typist
- plunge: to dive into; to rush headlong
- booster: a booster; a supporter
- adept: skilled; proficient
- put: to express; to phrase
- agility: nimbleness; flexibility
- reveal: to disclose; to show
Key sentences
1) it is to be expected that machines will continue to reach and exceed human performance on more and more tasks
(It is foreseeable that machines will keep matching, and then surpassing, human performance on an ever-growing range of tasks.)
2) The risk to productivity and the US’s competitive advantage is too high to do anything but double down on it.
(The stakes for productivity and America's competitive edge are so high that doubling down on AI is the only option.)
3) To visit the front lines of the great AI takeover is to observe machine learning systems routinely drubbing humans in narrow, circumscribed domains.
(To see the cutting edge of AI's rise is to watch machine learning systems routinely trounce humans in narrow, well-defined domains.)
4) Researchers at DeepMind also produced a system that can lip-read videos with an accuracy that leaves humans in the dust
(DeepMind researchers also built a lip-reading system whose accuracy leaves humans far behind.)
5) Training one system to do many things is exactly what it takes to develop a general intelligence, and juicing up that process is now a core focus of AI boosters.
(Training one system to do many tasks is precisely what general intelligence requires, and accelerating that process is now a central focus of AI's champions.)
6) A successful AI agent would display some transfer learning, with a degree of agility and reasoning
(A successful AI agent would show some transfer learning, along with a degree of agility and reasoning ability.)
7) It’s a bit of a leap from a Flash-and-Atari champion to an agent that improves the quality of healthcare, but that’s because our intelligent systems are still in kindergarten
(Going from a champion of Flash and Atari games to an agent that improves healthcare is quite a leap, but only because our intelligent systems are still in kindergarten.)