
Deepfakes are now so convincing that the human eye can't spot them. What can be done?


Image credit: Zuckerberg: Courtesy of Facebook; Obama: Neilson Barnard—Getty Images; Trump: Saul Loeb—AFP/Getty Images; Pelosi: Chip Somodevilla—Getty Images; Gadot & Johansson: Mike Coppola—Getty Images; Wireframes: Lidiia Moor—Getty Images

Like a zombie horde, they keep coming. First, there were the pixelated likenesses of actresses Gal Gadot and Scarlett Johansson brushstroked into dodgy user-generated adult films. Then a disembodied digital Barack Obama and Donald Trump appeared in clips they never agreed to, saying things the real Obama and Trump never said. And in June, a machine-learning-generated version of Facebook CEO Mark Zuckerberg making scary comments about privacy went viral.

Welcome to the age of deepfakes, an emerging threat powered by artificial intelligence that puts words in the mouths of people in video or audio clips, conjures convincing headshots from a sea of selfies, and even puts individuals in places they’ve never been, interacting with people they’ve never met. Before long, it’s feared, the ranks of deepfake deceptions will include politicians behaving badly, news anchors delivering fallacious reports, and impostor executives trying to bluff their way past employees so they can commit fraud.

So far, women have been the biggest victims of deepfakes. In late June, the app DeepNude shut down amid controversy after journalists disclosed that users could feed the app ordinary photos of women and have it spit out naked images of them.

There’s concern the fallout from the technology will go beyond the creepy, especially if it falls into the hands of rogue actors looking to disrupt elections and tank the shares of public companies. The tension is boiling over. Lawmakers want to ban deepfakes. Big Tech believes its engineers will develop a fix. Meanwhile, the researchers, academics, and digital rights activists on the front lines bemoan that they’re ill equipped to fight this battle.

Sam Gregory, program director at the New York City–based human rights organization Witness, points out that it’s far easier to create a deepfake than it is to spot one. Soon, you won’t even need to be a techie to make a deepfake.

Witness has been training media companies and activists in how to identify A.I.-generated “synthetic media,” such as deepfakes and facial reenactments—the recording and transferring of facial expressions from one person to another—that could undermine trust in their work. He and others have begun to call on tech companies to do more to police these fabrications. “As companies release products that enable creation, they should release products that enable detection as well,” says Gregory.

Software maker Adobe Systems has found itself on both sides of this debate. In June, computer scientists at Adobe Research demonstrated a powerful text-to-speech machine-learning algorithm that can literally put words in the mouth of a person on film. A company spokesperson notes that Adobe researchers are also working to help unmask fakes. For example, Adobe recently released research that could help detect images manipulated by Photoshop, its popular image-editing software. But as researchers and digital rights activists note, the open-source community, made up of amateur and independent programmers, is far more organized around making deepfakes persuasive and thus harder to spot.

For now, bad actors have the advantage.

This is one reason that lawmakers are stepping into the fray. The House Intelligence Committee convened a hearing in June about the national security challenges of artificial intelligence, manipulated media, and deepfakes. The same day, Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act, the first attempt by Congress to criminalize synthetic media used to deceive, defraud, or destabilize the public. State lawmakers in Virginia, Texas, and New York, meanwhile, have introduced or enacted their own legislation in what’s expected to be a torrent of laws aimed at outmaneuvering the fakes.

Jack Clark, policy director at OpenAI, an A.I. think tank, testified on Capitol Hill in June about the deepfakes problem. He tells Fortune that it’s time “industry, academia, and government worked together” to find a solution. The public and private sectors, Clark notes, have joined forces in the past on developing standards for cellular networks and for regulating public utilities. “I expect A.I. is important enough we’ll need similar things here,” he says.

In an effort to avoid such government intervention, tech companies are trying to show that they can handle the problem without clamping down too hard on free speech. YouTube has removed a number of deepfakes from its service after users flagged them. And recently, Facebook’s Zuckerberg said that he’s considering a new policy for policing deepfakes on his site, enforced by a mix of human moderators and automation.

The underlying technology behind most deepfakes and A.I.-powered synthetic media is the generative adversarial network, or GAN, invented in 2014 by the Montreal-based Ph.D. student Ian Goodfellow, who later worked at Google before joining Apple this year.

Until his invention, machine-learning algorithms had been relatively good at recognizing images from vast quantities of training data—but that’s about all. With the help of newer technology, like more powerful computer chips, GANs have become a game changer. They enable algorithms to not just classify but also create pictures. Show a GAN an image of a person standing in profile, and it can produce entirely manufactured images of that person—from the front or the back.
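To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. The tiny fully connected generator and discriminator, the image size, and the hyperparameters are illustrative assumptions for a toy setup, not Goodfellow's original architecture:

```python
# A minimal sketch of the GAN idea described above, using PyTorch.
# Sizes, models, and learning rates are toy assumptions for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random "noise" vector fed to the generator
IMG_DIM = 28 * 28  # flattened image size (e.g., a 28x28 grayscale crop)

# The generator learns to turn random noise into plausible images.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The discriminator learns to classify images as real (1) or generated (0).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: D learns to spot fakes, G learns to fool D.

    real_images: (batch, IMG_DIM) flattened pixels scaled to [-1, 1].
    """
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = G(noise)

    # Discriminator step: real images labeled 1, generated images labeled 0.
    opt_d.zero_grad()
    d_loss = loss(D(real_images), torch.ones(batch, 1)) + \
             loss(D(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D label the fakes as real.
    opt_g.zero_grad()
    g_loss = loss(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

Trained over many batches, the generator's outputs become progressively harder for the discriminator to flag as fake, which is the same dynamic that makes deepfakes increasingly hard for humans to spot.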

Researchers immediately heralded the GAN as a way for computers to fill in the gaps in our understanding of everything around us, to map, say, parts of distant galaxies that telescopes can’t penetrate. Other programmers saw it as a way to make super-convincing celebrity porn videos.

In late 2017, a Reddit user named “Deepfakes” did just that, uploading to the site adult videos featuring the uncanny likenesses of famous Hollywood actresses. The deepfake phenomenon exploded from there.

Soon after, Giorgio Patrini, a machine-learning Ph.D. who became fascinated—and then concerned—with how GAN models were being exploited, left the research lab and cofounded Deeptrace Labs, a Dutch startup that says it’s building “the antivirus for deepfakes.” Clients include media companies that want to give reporters tools to spot manipulations of their work or to vet the authenticity of user-generated video clips. Patrini says that in recent months, corporate brand-reputation managers have contacted his firm, as have network security specialists.

“There’s particular concern about deepfakes and the potential for it to be used in fraud and social engineering attempts,” says Patrini.
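The article doesn’t describe how Deeptrace’s detectors work, but published deepfake detectors are often framed as per-frame binary classifiers trained on labeled real and synthetic face crops. The sketch below illustrates that general approach in PyTorch; the small CNN, the input size, and the helper names (train_step, is_probably_fake) are assumptions for illustration, not Deeptrace’s actual system:

```python
# A minimal sketch of frame-level deepfake detection as binary
# classification, in PyTorch. Illustrative assumptions throughout;
# this is not Deeptrace Labs' actual system.
import torch
import torch.nn as nn

# A small CNN that scores a 64x64 RGB face crop: 1 = real, 0 = fake.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),           # single real/fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of labeled face crops.

    frames: (batch, 3, 64, 64) pixel tensors; labels: (batch, 1), 1=real, 0=fake.
    """
    optimizer.zero_grad()
    logits = detector(frames)
    batch_loss = loss_fn(logits, labels)
    batch_loss.backward()
    optimizer.step()
    return batch_loss.item()

def is_probably_fake(frame: torch.Tensor, threshold: float = 0.5) -> bool:
    """Flag a single face crop as likely synthetic."""
    with torch.no_grad():
        p_real = torch.sigmoid(detector(frame.unsqueeze(0))).item()
    return p_real < threshold
```

The catch, as Gregory and others note above, is that this is an arms race: a classifier trained on today’s fakes can be evaded by tomorrow’s better generators.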

Malwarebytes Labs of Santa Clara, Calif., recently warned of something similar, saying in a June report on A.I.-powered threats that “deepfakes could be used in incredibly convincing spear-phishing attacks that users would be hard-pressed to identify as false.” The report continues, “Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse.”

In the world of deepfakes, you don’t need to be famous to be cast in a leading role.

This article originally appeared in the August 2019 issue of Fortune.


