The Rocket Launch Problem
Some philosophers and artificial intelligence (AI) researchers like to use the metaphor of the rocket as a way to describe the coming age of superintelligence, a hypothetical form of synthetically created intelligence that will far surpass the intelligence of even the most gifted humans.
A rocket, as the argument goes, is what we are building as we proceed towards the creation of superintelligence. Once completed, the rocket will take off in a thunderous cloud, shake the earth and leave us all behind, literally. Unless, that is, we change our approach to developing artificial intelligence. If only we re-imagined the development of superintelligence to include robust steering, mankind would not have to fear being overpowered by future AI, as Max Tegmark explains in an inspirational TED talk.
Before we fret about the thunderous takeoff, it might make sense to have another look at that rocket.
Metaphors are only resting places for thoughts, yet they are powerful in subtle ways because they can guide thinking from afar; they can convince without having to prove a point. They can achieve the power of a meme, becoming formative and normative as well as blocking out other imaginaries while drawing little attention to themselves.
The rocket — and Rocket AI as its controlling agent — is an apt metaphor for the potential disruptiveness of a new technology, yet it is a terrible choice for thinking about how we might not only survive but live well with a game-changing technology. The rocket metaphor is not only a viable vehicle for escaping a dull planet Earth, but also a useful mechanism for escaping responsibilities on Earth. It is an effective metaphor for justifying a separation between the development of technical systems and living with them in the long term; it is the perfect vehicle for framing innovation as an elite practice and keeping the development of technologies apart from the messy social world in which they inevitably operate.
Rocket AI goes boldly where no one has gone before. The fuel that powers Rocket AI is produced not only from inspirational metaphors, but also from success stories. Algorithms beat people at the classic games of chess and Go, and at new multiplayer shooting games; computers sound like people, and algorithmically generated images look just like real people. These technical achievements fire up the escape-velocity mentality of Rocket AI, serving as pre-launch events that offer a glimpse of the coming age of superintelligence. Is it a coincidence that industry uses the term moonshots for ambitious projects with out-of-this-world growth potential? Is it a coincidence that some of the most successful AI-centric entrepreneurs are heavily invested in a billionaires’ space race?
The rush to launch exacts a price. The Rocket AI mentality has produced an artificial intelligence ecology that is not properly prepared to deal with the messiness of earthly existence. And so we find ourselves tasked with building after-the-fact escape capsules and emergency pods to fix the deficiencies of Rocket AI: patches for privacy, transparency, accountability and shared governance in everyday AI systems. This makes for really bad privacy, transparency and accountability, literally an afterthought of Rocket AI. And it renders the promises of AI for a better planet Earth unbelievable and counterintuitive; why worry about this planet if your efforts are geared towards building a machine to leave it behind?
Even within the AI community, the rocket story serves as more than a motivational sales pitch. The specter of an inadequately prepared launch into the unknown serves as justification for the research agenda of a precursor to superintelligence, namely artificial general intelligence, and specifically the value alignment problem. The goal of alignment research is to ensure that superintelligence “wants the same things that humans want”; i.e., that superintelligent machines and not-so-superintelligent humans are somehow aligned in their thinking and actions. Failing to ensure this alignment, the argument goes, would allow superintelligence to develop along whichever trajectory it deems most expedient for fulfilling its own requirements, disregarding human needs and possibly doing away with its pesky human creators altogether. As if superintelligence would care what we feeble humans want. And even if the alignment machinery can somehow be built, it seems likely that a superintelligence born of the Rocket AI mentality will align nicely with only a few of us.
Not everyone is convinced of the value of worrying about the advent of superintelligence. Andrew Ng has called for an end to this nonsense and instead wants more research on the “urgent problems”: discrimination, bias, job loss, and so on. Ng is not the only AI expert calling for a refocus, yet the way forward is contested. A conference venue dedicated to the socio-technical dimensions of AI finds its own research contributions critiqued for applying superficial technology fixes where fundamental policy revisions are needed.
But policy interventions are not a panacea for Rocket AI’s ills either.
AI as we know it is simply not designed to deal with the tangled human world. When problems are prepared for algorithmic treatment, they must be rendered in a representation the computer can process, a step that can strip out many of the details that made a given problem significant in the first place. Yet the very ability of AI to generalize across domains rests precisely on its capacity to do away with worldly details. Even the most basic operation in computer science, the assignment of numbers inherited from mathematics, is an abstraction from reality that sacrifices nuance. But that sacrifice is intentional, as it allows numbers to be disassociated from objects. Or, as Alfred Whitehead put it, “the first man who noticed the analogy between a group of seven fishes and a group of seven days made a notable advance in the history of thought”1.
Without abstraction, computer science would lose its ability to generalize with ease.
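The cost of that abstraction can be made concrete with a toy sketch (the `encode` function, the applicants, and their fields are hypothetical illustrations of mine, not anything from the article): once two different lives are reduced to the same feature vector, the computer can no longer tell them apart.

```python
# Toy illustration: encoding a "real world" case as numbers enables
# generalization, but strips the detail that made the case meaningful.

def encode(applicant):
    """Reduce a messy real-world description to a numeric feature vector."""
    return [applicant["age"], applicant["income"], int(applicant["employed"])]

alice = {"age": 34, "income": 52000, "employed": True,
         "note": "employment gap while caring for a sick parent"}
bob = {"age": 34, "income": 52000, "employed": True,
       "note": "steady job at the same firm for a decade"}

# After encoding, the two cases are indistinguishable: the 'note' field,
# the very thing that made each case significant, is simply gone.
assert encode(alice) == encode(bob)
```

The point is not that such encodings are mistakes; they are what makes generalization tractable. The point is that the decision about what to discard happens quietly, at the representation stage, long before any model is trained.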
So how far down do we have to dig to fix this problem? Do we need a new approach to synthetic intelligence, or just a refocus of AI? That debate will persist for a long time. In the interim, while the list of things that AI can’t fix grows, I suggest a preparatory step.
We need a different guiding metaphor for AI.
There is of course a history of alternate computing imaginaries. Most famously perhaps, Mark Weiser imagined computing as a walk in the woods. But he was probably walking all alone in that imagined forest, and the subsequent development of ubiquitous, ambient and other early alt-AI approaches all became entangled in the trappings of progress for a select few.
It is time to update the list of imaginaries. What if we imagined future AI not as a rocket but as a caravan?
More specifically, AI machinery could be imagined as a caravan of minivan-like pods on a long trip to a better place. Each pod would serve as a temporary home to a few people: a family in one, a few elderly folks in another, a boy and his dog in the next; lots of equipment, supplies and baggage throughout. All the pods are connected to each other such that the fastest can only move as quickly as the slowest. The caravan would be a state-of-the-art mobility platform, a people-mover optimized by best practices for safety: the safety of the passengers and of the creatures watching the caravan of minivans move across the landscape.
If we use the caravan as a model, we might think about an AI-enabled trip in a different way, and about what happens while we are en route to our destination. Instead of slumbering in a rocket’s hibernation chamber, we might be looking out the window of the caravan. We might spend more time thinking about the comfort of the passengers and the various not-really-necessary-but-important unplanned stops we will have to make along the way. We will think about the technical device as an experience-enabler and be in a better position to concentrate on what the trip wants to achieve, where we are going, and what we want to do when we arrive at our destination. We will think about possible complications at the planning stage instead of building patches after the fact.
Caravan AI might not seek first to devise an algorithm, optimize it for performance on the most advanced computing platforms, and only later deploy it in the ‘real’ world. Rather, it might first look at the ‘real’ world, energy costs, potholes, traffic jams and all, then consider what kind of change is even desirable and which changes are feasible, and only then consider ‘development’; moving carefully, with eyes, ears and collision sensors in alert mode for unexpected events.
There is no need to overstress the details of the caravan, or of any other metaphor for that matter. When Turing reflected on the possibility of synthetic intelligence, he proposed the idea of developing intelligence from a “smaller mechanism”, a child machine that could eventually become intelligent through a teaching process. Turing was not worried about the details of child rearing or temper tantrums. His learning-machines idea got the basics right. As unusual as the idea might have sounded in 1950, child machines set the stage for the concept of today’s machine-learning industrial complex.
It would be foolish to claim that Caravan AI — or any other alternate imaginary — could magically solve the problems produced in the wake of Rocket AI, including flawed predictive policing, ugly AI nationalism, digital monopolies, biased models and technological solutionism, but that is not the point here. Before we can productively address all the ills of Rocket AI, we need a new leitmotif that points AI in a better direction.
You have to start somewhere. So here is an attempt: Catch & Release.
More on that project later.
1 Alfred Whitehead. Mathematics in the History of Thought, 1957.
Translated from: https://medium.com/swlh/ai-has-a-rocket-problem-6949c6ed51e8