
Deploying thousands of people and advanced A.I., can Facebook's "cleanup campaign" get its user-privacy problem under control?


For 20 minutes on the morning of May 1, Facebook users saw a curious query at the end of every update on their feeds. “Does this post contain hate speech?” they were asked, in small font next to “yes” and “no” buttons. (If they clicked yes, a pop-up box of follow-up prompts emerged; if no, the question disappeared.) Users of the social network have long been able to report disturbing posts, but this in-your-face approach was unsettling. Even more perplexing: The question was appended to all posts, including photos of fuzzy kittens and foodie breakfast check-ins.

It didn’t take long for word—and snark—to spread around the web. “So glad Facebook has finally given me the ability to report every single pro–New York Mets post as ‘hate speech,’ ” quipped one Twitter user. Adding to the embarrassment, May 1 was opening day for F8, the company’s annual developer conference—and a cheerful “coming soon” status update from CEO Mark Zuckerberg himself was among those festooned with the query. “Even on a post from Zuck, it asked, ‘Is this hateful?’ ” says Guy Rosen, VP of product for the social media giant’s safety and security team, who sat down with Fortune later that same day.

As it turns out, the hate-speech feature was a bug—an “uncooked test,” in Rosen’s words, released prematurely. But though he was, broadly speaking, responsible for the blunder, he wasn’t apologetic about the technology. At some point soon, Rosen explained, feedback from such queries (applied smartly and sparingly) could be added to Facebook’s growing stockpile of weapons in its fight against harassment and other offensive or illicit activity that has proliferated on the platform. Those reports, in turn, would help train artificial intelligence systems to distinguish between innocuous fluff and posts that infringe on Facebook’s code of conduct.

In hindsight, it was ironically appropriate that the Zuckerberg post that was tagged that day read in part, “I’m going to share more about the work we’re doing to keep people safe.”

That Facebook needs cleaning up is something only a free-speech absolutist would dispute these days. The platform, with its 2.2 billion users, has an unmatched global reach. And its spreading swamp of harmful content, from election-manipulating “fake news,” to racist and terrorist propaganda, to the streaming of assaults and suicides via Facebook Live, has prompted an unprecedented outcry, with critics in the U.S. and abroad demanding that Facebook police itself better—or be policed by regulators.

“The technology needs large amounts of training data,” gleaned from users’ posts, to spot “meaningful patterns.”

—Guy Rosen, VP of Product


Rosen oversees the development of technology that helps Facebook flag hate speech, prohibited images, and other improper behavior. JESSICA CHOU FOR FORTUNE

The social media giant recently disclosed the mind-boggling quantity of some of these transgressions. In mid-May, Facebook reported that in the first quarter of 2018 alone, it had discovered 837 million instances of spam, false advertising, fraud, malicious links, or promotion of counterfeit goods, along with 583 million fake accounts (all of which it says it disabled). It also found 21 million examples of “adult nudity and sexual activity violations,” 3.4 million of graphic violence, 2.5 million of hate speech, and 1.9 million of terrorist propaganda related to ISIS, al Qaeda, or their affiliates. Facebook’s mission is to bring the world closer together—but this is not the closeness it had in mind.

Part of the fault lies in Facebook’s business model, explains Sarah Roberts, an assistant professor at UCLA’s Graduate School of Education and Information Studies who researches social media: “The only way to encourage user engagement without going broke is to ask people to contribute content for free. But when you ask unknown parties anywhere in the world to express themselves any way they see fit, you will get the full gamut of human expression.”

Granted, it wasn’t such expression that most recently got Facebook in trouble: It was the Cambridge Analytica scandal, in which it emerged that data of some 87 million Facebook users had been obtained by a third-party developer—and used by Donald Trump and other candidates in 2016 to target voters. The privacy breach earned Zuckerberg two grueling days of grilling from Congress. Some legislators, though, were just as eager to press him on fake news and opioid-sales scams. In front of the nation, Zuckerberg conceded, “I agree we are responsible for the content [on Facebook]”—a remarkable admission from a company that for years insisted it was just providing a platform and thus absolved from blame for what gets said, done, or sold on its network.

By the end of 2018, Facebook plans to double, to nearly 20,000, the number of moderators and other “safety and security” personnel whose job it is to catch and remove inappropriate content. And because even 20,000 people can’t possibly patrol all of the billions of videos, chats, and other posts on the massive network, Facebook is simultaneously developing artificial intelligence technologies to help do so.

Over several weeks this spring, Fortune spent time at Facebook’s Menlo Park, Calif., headquarters to see what that policing might look like. An irony quickly became apparent: For these people and machines to be more effective at their jobs, they will need to rely on increasingly invasive tactics. More humans will need to pore through more of your photos, comments, and updates. To improve their pattern recognition, A.I. tools will need to do the same. (As for your “private” messages, Facebook A.I. already scans those.) And to put particularly high-risk posts in context, humans and machines alike could dig through even more of a user’s history.

Such surveillance “is a bit of a double-edged sword,” says Roberts. It also doesn’t come cheap. Facebook has said that it expects its total expenses in 2018 to grow 50% to 60%, compared with 2017, partly owing to spending on human and A.I. monitoring. (The company doesn’t separately break out its monitoring expenses.)

It’s an outlay Facebook can certainly afford. The company, No. 76 on this year’s Fortune 500, has so far absorbed the recent controversies without taking a serious financial hit. Its first-quarter revenue jumped an impressive 49% year over year, to $12 billion, and its stock, which lost $134 billion in market value after the Cambridge Analytica news broke, now trades near pre-scandal levels. “[Monitoring] would have to be a pretty intensive investment to materially impact the margins of the business,” says John Blackledge, a senior research analyst with Cowen and a longtime follower of the company.

Still, it’s not an investment on which Facebook can skimp. Facebook can afford to lose some squeamish users, but if it drives away advertisers, who account for 98% of its revenue, it’s in big trouble. For now, Facebook says it hasn’t seen tangible disruption to the business, but some brands were voicing concerns over the presence of fake news and criminal activity well before the Cambridge Analytica exposé. And while beefed-up policing could create a safer user experience, that safety could come at an additional price. At a time when Facebook’s handling of private data and the sheer amount of information it holds have come under scrutiny, will consumers trust it to sift through even more of their posts? “We take user privacy very seriously and build our systems with privacy in mind,” Rosen asserts. But the more users know about Facebook’s cleanup efforts, the bigger the mess that might ensue.

On a spring morning in Menlo Park, more than 30 senior staffers gathered in Facebook’s Building 23 to discuss several meaty topics, including how the network should categorize hateful language. Such conversations happen every two weeks, when the company’s Content Standards Forum convenes to discuss possible updates to its rules on what kind of behavior crosses the line between obnoxious and unacceptable.

Five years ago, Facebook assigned the task of running the forum to Monika Bickert, a former assistant U.S. attorney who first joined the company as counsel for its security team. Bickert works closely with Rosen to ensure that Facebook develops tools to help implement the policies her team sets. The duo, in turn, collaborates with Justin Osofsky, whose duties as VP of global operations include overseeing the company’s growing ranks of content reviewers. If Facebook’s worst posts resemble dumpster fires, these three lead the bucket brigade.

Tall, redheaded, and athletic, Bickert exudes a bluntness that’s rare at the social network. She speaks openly about uncomfortable topics like the presence of sex offenders and beheading videos on the platform. And unlike some of the more idealistic executives, she doesn’t seem stunned by the fact that not everyone uses Facebook for good. “The abusive behaviors that we’re addressing are the same ones you would see off-line—certainly as a prosecutor,” she says.

“The abusive behaviors that we’re addressing are the same ones you would see off-line.”

—Monika Bickert, Head of Global Policy Management


Before joining the social network, Bickert spent more than a decade as a federal prosecutor. She now runs the Content Standards Forum. JESSICA CHOU FOR FORTUNE

Bickert’s team includes subject-matter experts and policy wonks whose credentials are as impressive as they are grim. (Think former counterterrorism specialists, rape crisis counselors, and hate-group researchers.) Their collective job is to develop enforceable policies to target and eradicate the nefarious activities they are all familiar with from the real world, while keeping the platform a bastion of (somewhat) free expression. “If Facebook isn’t a safe place, then people won’t feel comfortable coming to Facebook,” says Bickert.

Just defining impropriety is a tall order, however, especially on a global network. Take hate speech. The intention behind specific words can be tricky to parse: A Portuguese term might be considered a racial slur in Brazil, but not in Portugal. Language is also fluid: People in Russia and Ukraine have long used slang to describe one another, but as conflict between them has escalated in recent years, certain words have taken on more hateful meaning.

In an effort to be more transparent about its rules, Facebook in late April publicly released for the first time its entire, 27-page set of “community standards.” Some of its codes tackle racy content with an ultradry vocabulary. (“Do not post content that depicts or advocates for any form of nonconsensual sexual touching, crushing, necrophilia, or bestiality.”) Others are surprising for what they don’t ban. It’s okay to discuss how to make explosives, for example, if it’s for “scientific or educational purposes.” And while anyone convicted of two or more murders is banned from Facebook, a single homicide won’t get someone exiled from the land of the “Like” button—unless the individual posts an update about it. (The reason: While people may commit a single homicide accidentally or in self-defense, it is easier to establish intent with a multiple murderer; meanwhile, no users are allowed to promote or publicize crime of any kind.)

“They will not be definitions that every person will agree with,” Bickert says of the standards. “But [we want to] at least be clear on what those definitions are.” In the spirit of clarity, Facebook plans to host multiple “interactive” summits in the coming months to get feedback on its rules from the public and the press. Ultimately, though, it is up to the company to decide what it allows and what it bans—even in matters of life and death.

In the spring of 2017, Rosen put a team of engineers on “lockdown,” a Facebook practice in which people drop everything to solve a problem. The problem was dire indeed: People were using Facebook Live, a video-streaming service that had just launched, to announce their intention to kill themselves, and even to stream themselves doing it.

Dressed in a black T-shirt, jeans, and slip-on gray shoes, Rosen looks the part of a Silicon Valley techie-dude. But his casual demeanor belies the urgency with which his cross-disciplinary team of a few dozen took on the tragic issues on Facebook Live. “The purpose is to help accelerate work that’s already happening,” says the exec, seated in the same conference room where last year’s lockdown took place. The work that came out of the two-months-long period serves as a case study for how Facebook hopes to police content—and it hints at how powerful and pervasive those efforts could become.

Facebook doesn’t disclose the frequency of suicide attempts on its platform. But broader data hints at the scope of the problem. In the U.S. alone, about 45,000 people a year kill themselves, while some 1.3 million try to do so. The U.S. population stands at 325 million; Facebook’s user base tops 2.2 billion. “The scale at which Facebook is dealing with this has to be enormous,” says Dan Reidenberg, executive director of SAVE, a nonprofit aimed at raising suicide awareness.

During and after the lockdown, with Reidenberg’s help, Facebook designed policies to help those in need while reducing the amount of traumatic content on the platform—and the likelihood of “contagion,” or copycats. Company policy now states that Facebook removes content that “encourages suicide or self-injury, including real-time depictions of suicide,” but also that it has been advised not to “remove live videos of self-harm while there is an opportunity for loved ones and authorities to provide help or resources.”

That’s obviously a difficult distinction to make, which is one reason Facebook also brought on 3,000 moderators—one of its biggest expansions of that workforce—to sift through videos of at-risk users. To serve them, engineers developed better review tools, including “speed controls” that allowed reviewers to easily go back and forth within a Live video; automated transcripts of flagged clips; and a “heat map” that showed the point in a video where viewer reactions spike, a sign that the streamers might be about to do harm.
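
To illustrate the kind of signal that last tool relies on, here is a minimal sketch of how spikes in viewer reactions along a live video's timeline might be surfaced. It is an assumption-laden illustration only: the bin size, threshold, and function names are invented, and nothing here reflects Facebook's actual implementation.

```python
# Illustrative sketch only: bin reaction timestamps from a live video and flag
# moments where activity jumps above a trailing baseline. Bin size, window,
# and threshold are hypothetical, not Facebook's actual tooling.
from collections import Counter
from typing import List

def reaction_heatmap(timestamps: List[float], bin_seconds: int = 10) -> Counter:
    """Count reactions per fixed-width time bin (seconds from stream start)."""
    return Counter(int(t // bin_seconds) for t in timestamps)

def spike_bins(heatmap: Counter, window: int = 6, factor: float = 3.0) -> List[int]:
    """Return bins whose count exceeds `factor` times the trailing average."""
    spikes = []
    for b in sorted(heatmap):
        trailing = [heatmap.get(b - i, 0) for i in range(1, window + 1)]
        baseline = max(sum(trailing) / window, 1.0)
        if heatmap[b] > factor * baseline:
            spikes.append(b)
    return spikes

if __name__ == "__main__":
    # Simulated reaction times: steady background activity, then a burst near 300s.
    times = [float(t) for t in range(0, 290, 7)] + [300 + 0.5 * i for i in range(80)]
    hm = reaction_heatmap(times)
    print("reaction spikes (seconds):", [b * 10 for b in spike_bins(hm)])
```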


Zuckerberg appears at a U.S. Senate hearing on April 10. TING SHEN — XINHUA NEWS AGENCY/GETTY IMAGES

Facebook says that all of its moderators receive ongoing training. The company gave Fortune a rare glimpse of material used to prep moderators (in this case, on what to do when dealing with content about “regulated goods” like prescription drugs and firearms), and it’s admirably extensive. Still, even armed with training and high-tech tools, reviewers in suicide-risk situations have an emotionally taxing and hugely impactful task—and whether they’re equipped to handle the weight of it is an open question. Like other tech companies that use content reviewers, including Twitter and YouTube, Facebook discloses little about their qualifications, or about how much they’re paid. The company does say that all are offered psychological counseling: “The reality of this work is hard,” admits Osofsky, who spoke with Fortune by phone while on paternity leave.

That makes the role of technology even more crucial. Facebook now deploys A.I. systems that can detect suicidal posts; the software searches for phrases like “Are you OK?,” alerts human reviewers, and directs resources to users it deems at risk. It can even alert a user’s friends and urge them to offer help. At some point soon, chatbots could act more directly, sending messages of concern and even automatically calling first responders.
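
The article describes this system only at the level of phrase matching, so the following is a minimal sketch of that idea under stated assumptions: scan a post and the comments beneath it for concern phrases and, on a match, escalate to a human reviewer. The phrase list, function names, and escalation step are invented for illustration; Facebook's production models are far more sophisticated than keyword search.

```python
# Hypothetical sketch of phrase-based triage: match concern phrases in a post
# or its comments and route hits to a human review queue. The phrase list and
# names below are illustrative, not Facebook's API or classifier.
import re
from typing import Iterable, Optional

CONCERN_PHRASES = [
    "are you ok",          # worried comments from friends
    "i want to end it",    # first-person statements of intent
    "goodbye everyone",
]
_PATTERN = re.compile("|".join(re.escape(p) for p in CONCERN_PHRASES), re.IGNORECASE)

def flag_for_review(post_text: str, comments: Iterable[str] = ()) -> Optional[str]:
    """Return the first matched concern phrase if the post or its comments
    should be escalated to a human reviewer, else None."""
    for text in (post_text, *comments):
        match = _PATTERN.search(text)
        if match:
            return match.group(0)
    return None

if __name__ == "__main__":
    hit = flag_for_review("Feeling really low tonight...", ["Are you OK? Call me."])
    if hit:
        print(f"escalate to human review (matched {hit!r})")
```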

Rosen says that since last year’s lockdown, Facebook has referred more than 1,000 suicide-risk cases worldwide to first responders. Each life saved is a profound achievement, and the increased reliance on software suggests there’s more progress to come. That’s why Reidenberg is optimistic about A.I. tools. “I believe that technology provides us the best hope of reducing the risk of suicide in the world,” he says. Still, he concedes, “This is uncharted territory.”

Assessing risks and parsing posts, on such a global scale, is indeed unprecedented. To do it effectively, Facebook will likely end up accessing and analyzing ever more of our data.

A.I. is already offering a radical shortcut, because it can sift through so much information in such little time. In cases of sexual exploitation and unlawful nudity, for example, software can already detect the presence of nipples. How do A.I. tools learn to do this? By studying lots and lots of photos—our photos—and looking for patterns. But while technology can ascertain an areola, it can’t distinguish between an acceptable depiction of the body part—breastfeeding pics—and so-called “revenge porn,” a major no-no on the platform.


Osofsky, VP of global operations, oversees Facebook's rapidly growing ranks of content reviewers, including those who screen for hate speech and posts indicating suicide risk. JESSICA CHOU FOR FORTUNE

Where tech fails, human surveillance fills the gaps. Here, too, more information and more context can lead to more informed decision-making. Facebook’s content cops point out that its reviewers don’t have access to data that isn’t pertinent to the issue at hand. “The tools our content reviewers use provide a limited view and context based on the type of content that is being reviewed,” says Rosen. The implication: Facebook doesn’t have to know all your business to help you avoid hate speech or get help.

But where to draw that line—how much context is enough context—is a call Facebook will increasingly be making behind the scenes. Whether we can accept the tradeoff, giving the network more latitude to assess our data in exchange for safety, is a question too complex to answer with yes or no buttons.

This article originally appeared in the June 1, 2018 issue of Fortune.



 
