Philosophers love to invent thought experiments, imagining mad situations that shake up slumbering dogmas. One of their favorite targets is your everyday sense of identity. Are you sure that you are what you think you are—a mind firmly attached to a specific, physical human body? Maybe you’re really a brain in a vat or a program running in some vast computer, mistaking a simulation for reality. Or maybe you’re actually your spouse, having a dream of role reversal. How would you know?

Those issues came to life for me last December, when I had an unforgettable experience of being in two places at once. So will you, very likely—soon, and then often. Routine out-of-body experience doesn’t require esoteric spiritual discipline, drugs or psychosis. It is a coming, practical technology.

My story: I had sent effusive, genuine regrets to the organizers of last year’s Nobel Week Dialogue in Sweden, a day-long, high-level science conference run by the Nobel Foundation’s media arm, saying I couldn’t attend due to scheduling conflicts. The dialogue’s theme was “The Future of Intelligence,” a long-term obsession of mine, and it looked to be a grand event.

The organizers in Gothenburg came back with “an interesting opportunity”: I could participate in a new way, without leaving my home in Cambridge, Mass., by using the BeamPro platform. From my desktop, I would control a large robot—roughly human-sized, though not humanoid. The robot would display live video and audio feeds, so people could see and hear me. It would also support typed messages. I too would be able to see and hear, using sensors attached to the robot and sharing its perspective. Naturally, I jumped at the chance.

My first voyages were tentative. I looked at the robot’s upper screen to decide where to go. Then I looked at the lower screen to check for obstacles, swiveled and slowly inched forward. Rinse, lather, repeat. At this stage, I was very aware that I was sitting at home, at a terminal in Cambridge, operating a machine in Gothenburg.

But after just a few minutes, I gained confidence. The process became fluid. Soon I navigated effortlessly and moved quickly. I could focus on the remote environment, taking in its sights and sounds. I was there.

A few early arrivers—a group of students visiting from Malaysia—entered the discussion area. I strolled up and introduced myself. The conversation began awkwardly. Usually, body language conveys lots of basic information, such as whom we’re addressing and whether a message has been understood. At first, I had to attend consciously to a checklist: I’d turn toward the person I wanted to address, somehow make eye contact (doing a little jig, if necessary, to get their attention) and type out, “Am I loud enough?” But in sustained conversation, the strangeness of the situation quickly faded, and we got to a meeting of minds. Several of us went for a stroll together, followed by an orgy of selfies.

Then, in an adjoining auditorium, the session proper began. I was scheduled to make a surprise appearance. It was dark backstage and (as is the way of these things) chaotic. On cue, I entered through a long narrow runway, demarcated by dozens of dazzling lights, moving at a good clip for dramatic effect. I had the uncanny but exhilarating feeling that I was living inside a video game. I made it onstage, and the audience got a glimpse of the future of robotics, communication and reality.

I have seen the future, and it almost works. With more powerful sensors and actuators, out-of-body experiences will become even more compelling. It is easy to imagine brilliantly attractive possibilities: immersive tourism to anywhere, anytime, without needing to leave home. Fragile human bodies are ill-suited to deep-space environments, but human minds will experience them richly.

We’ll need to rethink how we answer the question “Where am I?”—and then, inevitably, “What am I?”
