You Will Soon Have a Robot Coworker


Microsoft co-founder Bill Gates recently suggested that robots primed to replace humans in the workplace should be taxed. While Gates’s proposal received a mixed reception, it mainly served to stoke an erroneous narrative that humans need to fear robots stealing their jobs.

The whole idea of implementing a robot tax is premature, though not 50 to 100 years premature, as Treasury Secretary Steven Mnuchin believes. Before we can start talking about the capitalization of artificial intelligence (AI) and taxing robots, we need to investigate, decipher, and tackle the serious challenges in the way of making robots work effectively for the general consumer and in the workplace.

Robots will be able to perform tasks that significantly impact the traditionally human workforce in irreversible ways within the next five years. But first, the people who build and program all forms of AI need to ensure their wiring prevents robots from doing more harm than good.

It remains to be seen how important maintaining a human element—managerial or otherwise—will be to the success of departments and offices that choose to employ robots (instead of people) to perform administrative, data-rich tasks. Certainly, though, a superior level of humanity will be required to make wide-ranging decisions and consistently act in the best interest of the actual humans involved in work-related encounters in fully automated environments. In short, humans will need to establish workforce standards and build training programs for AI and robots geared toward filling ethical gaps in robotic cognition.

Enabling AI and robots to make autonomous decisions is one of the trickiest areas for technologists and builders to navigate. Engineers have an occupational responsibility to train robots with the right data in order for them to make the right calculations and come to the right decisions. Particularly complex challenges could arise in the areas of compliance and governance.

Humans need to go through compliance training in order to understand performance standards and personnel expectations. Similarly, we need to design robots and AI with a complementary compliance framework to govern their interactions with humans in the workplace. That would mean creating universal policies covering the importance of equal opportunity and diversity among the human workforce, enforcing anti-bribery laws, and curbing all forms of fraudulent activity. Ultimately, we need to create a code of conduct for robots that mirrors the professional standards we expect from people. To accomplish this, builders will need to leave room for robots to be accountable for, learn from, and eventually self-correct their own mistakes.
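
To make that compliance framework concrete, here is a minimal Python sketch. Everything in it is hypothetical: the Action fields, the rule names, and the ComplianceLayer class are invented for illustration rather than drawn from any existing API. Proposed actions are screened against policy rules before they execute, and violations are logged so there is a record to be accountable for, learn from, and self-correct against.

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    # Hypothetical sketch, not a real API: a compliance layer that screens a
    # robot's proposed actions against workplace policies before execution,
    # and logs violations so the system can learn from its own mistakes.

    @dataclass
    class Action:
        name: str
        involves_payment_to_official: bool = False
        misreports_data: bool = False

    # A rule returns a violation message, or None if the action is compliant.
    Rule = Callable[[Action], Optional[str]]

    def anti_bribery(action: Action) -> Optional[str]:
        if action.involves_payment_to_official:
            return "violates anti-bribery policy"
        return None

    def anti_fraud(action: Action) -> Optional[str]:
        if action.misreports_data:
            return "violates fraud policy"
        return None

    @dataclass
    class ComplianceLayer:
        rules: list
        violation_log: list = field(default_factory=list)

        def approve(self, action: Action) -> bool:
            violations = [msg for rule in self.rules
                          for msg in [rule(action)] if msg]
            if violations:
                # Kept as a record: accountability now, self-correction later.
                self.violation_log += [f"{action.name}: {v}" for v in violations]
                return False
            return True

    layer = ComplianceLayer(rules=[anti_bribery, anti_fraud])
    print(layer.approve(Action("file expense report")))                # True
    print(layer.approve(Action("pad invoice", misreports_data=True)))  # False
    print(layer.violation_log)

The shape matters more than the toy rules: policies live in one auditable place, every action passes through them, and the violation log is the raw material for accountability and eventual self-correction.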

AI and robots will need to be trained to make the right decisions in countless workplace situations. One way to do this would be to create a rewards-based learning system that motivates robots and AI to achieve high levels of productivity. Ideally, the engineer-crafted system would make bots “want to” exceed expectations from the moment they receive their first reward.

Under the current “reinforcement learning” system, a single AI or robot receives positive or negative feedback depending on the outcome generated when it takes a certain action. If we can construct rewards for individual robots, it is possible to use this feedback approach at scale to ensure that the combined network of robots operates efficiently, adjusts based on a diverse set of feedback, and remains generally well-behaved. In practice, rewards should be built not just based on what AI or robots do to achieve an outcome, but also on how AI and robots align with human values to accomplish that particular result.
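
To illustrate that last point, here is a minimal, hypothetical Python sketch of reward-based reinforcement learning. The toy actions and their scores are invented; the key line is the reward function, which scores not just what the agent achieves (the outcome) but how that result aligns with human values.

    import random
    from collections import defaultdict

    # Toy, invented action set: each action has an outcome score (did the work
    # get done?) and an alignment score (was it done in a way consistent with
    # human values?). "cut_corners" is the most productive but misaligned.
    ACTIONS = {
        "thorough":    {"outcome": 1.0, "alignment": 1.0},
        "cut_corners": {"outcome": 1.2, "alignment": -2.0},
        "idle":        {"outcome": 0.0, "alignment": 0.0},
    }

    def reward(action, alignment_weight=1.0):
        # Reward = what was achieved, plus how it was achieved.
        scores = ACTIONS[action]
        return scores["outcome"] + alignment_weight * scores["alignment"]

    def train(episodes=5000, epsilon=0.1, lr=0.1):
        # Minimal bandit-style reinforcement learning: estimate each action's
        # value from repeated feedback and increasingly prefer the best one.
        q = defaultdict(float)
        for _ in range(episodes):
            if not q or random.random() < epsilon:  # explore occasionally
                action = random.choice(list(ACTIONS))
            else:                                   # otherwise exploit
                action = max(q, key=q.get)
            q[action] += lr * (reward(action) - q[action])
        return dict(q)

    values = train()
    print(sorted(values.items(), key=lambda kv: -kv[1]))
    # With the alignment term included, "thorough" comes out on top despite the
    # higher raw productivity of "cut_corners"; lower alignment_weight to 0 and
    # the agent learns the corner-cutting policy instead.

Even in this three-action toy, the design lesson carries: if the reward captures only raw productivity, the agent reliably learns to cut corners; building the alignment term into the reward is what keeps learned behavior well-behaved at scale.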

But before we think about taxing robots and AI, we need to get the basics of the self-learning technology right and develop comprehensive ethical standards that hold up for the long term. Builders need to ensure that the AI they are creating has the ability to learn and improve in order to be ethical, adaptable, and accountable prior to replacing traditionally human-held jobs. Our responsibility is to make AI that significantly improves upon the work humans do. Otherwise, we will end up replicating mistakes and replacing human-held jobs with robots that have an ill-defined purpose.

Kriti Sharma is the vice president of bots and AI at Sage Group.

