
Latest from OpenAI CEO Sam Altman: "Planning for AGI and beyond"

Yijie (译介) · 2023-06-20



ChatGPT, the AI chatbot developed by OpenAI, has been a sensation since launch: it passed one million sign-ups within five days and topped 100 million active users within two months (a milestone that took WeChat 14 months), sweeping across the globe and landing in China like a typhoon. Before that wave had even subsided came news that OpenAI's CEO, Sam Altman, had invested $180 million (roughly 1.2 billion RMB) in a longevity-technology company. He has even said outright that, besides AI, only two things interest him in the future: unlimited energy and unlimited lifespan. His latest statement follows:



Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.

Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:

We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.

We want the benefits of, access to, and governance of AGI to be widely and fairly shared.

We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.

The short term


There are several things we think are important to do now to prepare for AGI.


First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.


A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.

Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.
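
As a concrete illustration of that kind of democratized access, the snippet below sketches a call to a hosted model through OpenAI's public API. It is not part of Altman's post; it assumes the pre-1.0 `openai` Python client (`pip install openai==0.28`) and an `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch: querying a hosted model through OpenAI's public API.
# Assumes the pre-1.0 `openai` Python package and an API key in the
# OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a hosted model exposed through the API
    messages=[
        {"role": "user",
         "content": "Summarize 'Planning for AGI and beyond' in one sentence."}
    ],
    temperature=0.2,  # keep the output close to deterministic
)
print(response["choices"][0]["message"]["content"])
```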

As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.

Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT is an early example of this.

In particular, we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion. Our eventual hope is that the institutions of the world agree on what these wide bounds should be; in the shorter term we plan to run experiments for external input. The institutions of the world will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.

The “default setting” of our products will likely be quite constrained, but we plan to make it easy for users to change the behavior of the AI they’re using. We believe in empowering individuals to make their own decisions and the inherent power of diversity of ideas.

We will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Our plan in the shorter term is to use AI to help humans evaluate the outputs of more complex models and monitor complex systems, and in the longer term to use AI to help us come up with new ideas for better alignment techniques.
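
One common pattern behind "AI helping humans evaluate outputs" is to have a stronger critic model score another model's answers so that human reviewers can concentrate on the weakest or riskiest cases. The sketch below is a hypothetical illustration of that pattern, not OpenAI's actual pipeline; the critic prompt, model choice, and review threshold are all assumptions, and it reuses the pre-1.0 `openai` client from the earlier example.

```python
# Hypothetical sketch of AI-assisted evaluation: a critic model rates
# another model's answer so humans can triage low-scoring cases.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def critic_score(question: str, answer: str) -> float:
    """Ask a critic model to rate an answer from 0 (wrong) to 10 (correct)."""
    prompt = (
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Rate the answer's factual accuracy from 0 to 10. "
        "Reply with the number only."
    )
    reply = openai.ChatCompletion.create(
        model="gpt-4",  # assumption: a stronger model serves as the critic
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring
    )
    return float(reply["choices"][0]["message"]["content"].strip())

# Illustrative threshold: anything below 7/10 goes to a human reviewer.
score = critic_score("In what year did Apollo 11 land on the Moon?", "1969.")
if score < 7:
    print(f"Flagged for human review (score={score})")
```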

Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.

Third, we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access.

In addition to these three areas, we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.

The long term

We believe that the future of humanity should be determined by humanity, and that it's important to share information about progress with the public. There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.

The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.

AI that can accelerate science is a special case worth thinking about, and perhaps more impactful than everything else. It’s possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly (and even if the transition starts slowly, we expect it to happen pretty quickly in the final stages). We think a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important (even in a world where we don’t need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt).

Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.

We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. We hope to contribute to the world an AGI aligned with such flourishing.

Source:

https://openai.com/blog/planning-for-agi-and-beyond

