
Artificial intelligence is easily fooled: it can't tell a turtle from a rifle


Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have discovered how to trick Google’s (GOOG, +0.66%) software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software thought it was a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.

The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.

For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.

All the changes the MIT researchers made to the turtle image were imperceptible to the human eye, explained Anish Athalye, an MIT researcher and Ph.D. candidate in computer science who co-led the experiment.

After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modified image would still trick Google’s computers. The researchers then took photos and video of the 3-D printed turtle, and fed that data into Google’s image-recognition software.

Sure enough, Google’s software thought the turtles were rifles.
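To make that querying step concrete, here is a minimal sketch of the kind of check involved: feed a photograph into an off-the-shelf ImageNet classifier and read back its top predictions. This is only an illustration under assumptions; the experiment itself targeted Google’s production image-recognition system, and the model choice (torchvision’s Inception v3) and the file name turtle_photo.jpg are stand-ins.

# Hypothetical sketch: classify one photo with a pretrained ImageNet model
# and print its five most likely labels. The model and file name are stand-ins
# for the system and images used in the actual experiment.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.inception_v3(pretrained=True)
model.eval()

image = Image.open("turtle_photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)          # shape: [1, 3, 299, 299]

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top5 = torch.topk(probs, k=5)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"class {idx.item():4d}  probability {p.item():.3f}")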

MIT released an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for review at an upcoming AI conference.

Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns. If researchers feed enough images of cats into these neural networks, the networks learn to recognize patterns in those images and can eventually spot felines in photos without human help.
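As a rough illustration of that learning process, the sketch below trains a tiny convolutional network to separate two classes of labeled photos. Everything here, the directory layout, network size, and hyperparameters, is an assumption chosen for brevity, not the setup used by Google or the MIT team.

# Hypothetical sketch: a small convolutional network learns to tell two classes
# of photos apart by repeatedly adjusting its weights to reduce its labeling
# errors. Expects folders like photos/cat/... and photos/other/... on disk.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("photos", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # two output classes: cat / not cat
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)   # how wrong were the guesses?
        loss.backward()                          # compute weight adjustments
        optimizer.step()                         # apply them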

But these neural networks can sometimes stumble when they are fed pictures with poor lighting or partially obscured objects. The way these neural networks work is still somewhat mysterious, Athalye explained, and researchers still don’t know exactly why they succeed or fail at recognizing a given object.

The MIT team’s algorithm created what are known as adversarial examples, essentially computer-manipulated images crafted to fool software that recognizes objects. While the turtle image may resemble a reptile to humans, the algorithm morphed it so that it shared unknown characteristics with an image of a rifle. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm carry over into the physical world.
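The paper’s own method is not reproduced here, but the sketch below conveys the general idea of such an attack: repeatedly nudge the image’s pixels, under small random transformations meant to mimic real-world viewing conditions, in whatever direction makes a local classifier more confident in the wrong label, while keeping the total change too small for a person to notice. The local PyTorch model, the target class index, the step sizes, and the omission of ImageNet normalization are all illustrative assumptions.

# Hypothetical sketch of a targeted, transformation-robust adversarial attack
# against a local classifier. Each step samples a random brightness change and
# pixel shift, then moves the image slightly toward the target (wrong) class
# while capping how far any pixel may drift from the original photo.
import torch
from torchvision import models
import torchvision.transforms.functional as TF

model = models.inception_v3(pretrained=True).eval()
target = torch.tensor([413])              # desired wrong class index (illustrative)
loss_fn = torch.nn.CrossEntropyLoss()

def random_transform(x):
    # Mimic real-world variation: a small brightness change and a small shift.
    x = TF.adjust_brightness(x, 0.8 + 0.4 * torch.rand(1).item())
    shift = torch.randint(-4, 5, (2,)).tolist()
    return TF.affine(x, angle=0.0, translate=shift, scale=1.0, shear=[0.0])

x = torch.rand(1, 3, 299, 299)            # stand-in for the turtle photo
x_orig = x.clone()
epsilon = 8.0 / 255                       # keep the total change visually negligible

for step in range(200):
    x.requires_grad_(True)
    loss = loss_fn(model(random_transform(x)), target)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x = x - 0.01 * grad.sign()                          # step toward the target class
        x = x_orig + (x - x_orig).clamp(-epsilon, epsilon)  # stay close to the original
        x = x.clamp(0.0, 1.0)

Averaging the attack over random transformations, rather than optimizing against a single fixed view, is what lets a perturbation of this kind survive changes in angle and lighting, which is the property the 3D-printed turtle demonstrated.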

Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft (MSFT, +0.46%) and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook (FB, -0.40%) and Amazon (AMZN, +0.86%) would also likely blunder, he speculates, because of their similarities.

In addition to airport scanners, home security systems that rely on deep learning to recognize certain images may also be vulnerable to being fooled, Athalye explained.

Consider cameras that are increasingly set up to record only when they notice movement. To avoid being tripped by innocuous activity like cars driving by, such cameras could be trained to ignore automobiles. Criminals could exploit that blind spot by wearing t-shirts specially designed to fool the computer into thinking it sees trucks instead of people. If so, burglars could easily bypass the security system.

Of course, this is all speculation, Athalye concedes. But given how common hacking has become, it’s a scenario worth taking seriously. Athalye said he wants to test his idea and eventually make “adversarial t-shirts” that have the ability to “mess up a security camera.”

Google and other companies like Facebook are aware that hackers are trying to figure out ways to spoof their systems. For years, Google has been studying the kind of threats that Athalye and his MIT team produced. A Google spokesperson declined to comment on the MIT project, but pointed to two recent Google research papers that highlight the company’s work on combating such adversarial techniques.

“There are a lot of smart people working hard to make classifiers [like Google’s software] more robust,” Athalye said.

