Our Russian Yuri says, “Clearly I met Victor first”

There is a line in the Divine Comedy: “It is love that moves the sun and the other stars.” It serves as the closing words of the poet Dante’s journey from the mortal world through Hell and Purgatory to Paradise. (Epigraph)

This winter I fell in love with an anime called Yuri on Ice. A friend talked me into it, and at first I watched purely for fun; but watching the protagonist strive and strive again, I found myself seeing, after a long absence, a self I had forgotten, or perhaps the self now living through the same stage of life, and feeling many kinds of “love” I had never felt before.

The “love” here is just what Yuri on Ice says it is: “My ‘love’ is not some easily understood affection or romance, but the subtle bonds between a person and their friends, family, and lover: being able to feel that emotion and wanting, for the first time, to actively hold on to it. That inexplicable feeling is what I will tentatively call ‘love.’”

The premise of the show is an interesting one. The protagonist, Yuri Katsuki, is a world-class figure skater who reaches the world-level final for the first time, only to finish dead last. From the opening of the first episode the audience sees, from a god’s-eye view, his longing for the “skating champion” Victor. He finally makes it into the GPF (Grand Prix Final) to share the ice with Victor, then loses by missing every single jump, which is arguably harder to pull off than landing them all (ha). He goes on to lose the rest of the season’s competitions in one stretch, mopes in Tokyo for four months, and heads back to his hometown to be unemployed. By the end of the first episode the viewer can be fairly certain that Katsuki is technically excellent but mentally fragile in competition. It is hard to believe this guy is Japan’s ace and the male lead of a sports anime; it is a genuinely unusual setup.

Yet it is precisely Katsuki’s weakness at life’s decisive moments that lets so many of us see our own reflection in him: struggling alone far from home, lonely, disappointed, afraid, gradually losing sight of the original dream, slowly forgetting even what we once looked like. So although the theme of the work is “love,” it is presented neither childishly nor melodramatically; the protagonist’s inner growth comes through in one small, moving detail after another.

Since the protagonist has returned to his hometown, the quiet seaside hot-spring town of Hasetsu, let me introduce his family: comfortably well-off, both parents alive, a capable elder sister, all of them supportive of his skating career without pressuring him. How did someone with that background end up living so timidly? Because he fell for figure skating as a boy. His life was skating, then everything and everyone connected with skating, and nothing beyond it, to the point that he did not even know his boyhood crush had gotten married. His family, for their part, never quite understood what he was doing; they barely knew the rules of competition. Once settled back home, Katsuki goes to see that first crush, Yuko, planning to charm her by quietly performing a complete imitation of his idol Victor’s free-skate program. Alas, “would that we had met before you wed”: Yuko’s triplet daughters film the whole performance and upload it to YouTube, and by sheer accident he charms his idol Victor instead. And so our great “living legend” (winner of five consecutive Grand Prix Finals) at last makes his dazzling entrance.

What kind of living legend is Victor? The standard male-lead package: famous young, gold medal after gold medal, handsome and charming, dressed in Burberry with LV luggage. With a character this perfect, we know he must secretly be starved for love, waiting to be warmed and won over; otherwise all those shoujo manga I have read were for nothing. Yet Victor nearly overturned my shoujo-manga instincts, because Victor is Russian through and through, and a Russian with tactics at that. Stop watching too early and Victor remains nothing more than a dazzling, flawless heartthrob. There is a detail in episode two: asked by reporters about Victor, Yakov, the coach who raised him from childhood, says, how could someone that selfish ever be a coach? At first we assume Yakov is simply bad-mouthing him, but later it becomes clear that Yakov is just relentlessly blunt. Another detail: Victor is a national hero (not my words; the show’s own line is “our Russian hero Victor”), yet when he announces he is stepping away, no one on the Russian national team tries to keep him except his coach. Where are his family, his friends, his teammates? Nowhere. It turns out the only ones who miss him are his rivals and our angel Yuri. Which is very interesting: a man with both looks and substance, and not even a girlfriend to see him off?! Later, in episode ten, he says it himself: for more than twenty years I set “life” and “love” aside and devoted myself entirely to figure skating.
What kind of person is stubborn enough to chase a career that single-mindedly, with no life and no love? Only someone obsessively in love with skating. One detail in episode one explains why Victor wants to retire: his Grand Prix Final total is 335.76 points, against 301.46 for second place and 288.59 for third. He beat the runner-up by more than eleven percent, which in competitive sport is, simply put, a crushing margin. The higher you climb, the lonelier it gets: a master left searching in vain for defeat. So when he sees Katsuki’s complete imitation of that 335.76-point free skate, he feels a surge of delight, the same delight he felt watching the other rising star, Yuri Plisetsky, attempt the quadruple Salchow just as he himself had as a boy.

Which brings in the other Yuri (Katsuki’s given name romanizes as Yuri as well): the Russian “fairy of the ice,” fifteen years old, Junior Grand Prix Final champion, a blond-haired, blue-eyed boy, practically a copy of young Victor, right down to sharing the same coach. But the fifteen-year-old, who fancies himself a tiger on the ice, is still green. In him Victor sees no prospect of quickly raising a boss, defeating the boss, absorbing all the experience points, and completing his own grand self-upgrade, so he decides to let the boy ripen and come back for him after “clearing” Katsuki first. Off Victor flies to Japan to train the skater with first-rate technique and rock-bottom nerves.

Katsuki, naturally, gives the glittering “living legend” of figure skating the warmest possible welcome, and once the achievement “whatever Victor says goes; on a diet means not one bite of katsudon” is unlocked, our Russian fairy comes racing over. Our Russian Yuri says: clearly I met Victor first, and he promised to choreograph my winning program, so how did some little Japanese sprite spirit him away? Since both are named Yuri, a battle it must be, and only the winner gets the massive experience bonus of Victor’s coaching.

But Victor thinks: with experience like mine, coaching one is coaching and coaching two is still coaching, so why not take both and see whose experience grows faster? So Victor, a teacher with little romantic experience of his own (but deep tactics), decides to teach through love and assigns each student a theme. Japanese Yuri (too earnest, nerves too weak) gets “love as eros,” sensual, romantic love; Russian Yuri (fond of leopard and tiger print) gets “love as agape,” unconditional love. What, you don’t like your themes? As long as Teacher Victor likes them, that is what counts!

With that, the three leads are introduced: Victor, Katsuki, and Plisetsky. Many viewers see the two Yuris as two faces of a mirror: Katsuki emotional and expressive, Plisetsky rational and technically gifted. And Victor, does he combine expressiveness with technique? In my view the answer is no. At the Cup of China it is Victor’s own coach Yakov who says that Popovich’s free skate carries an artistic content Victor lacks: a program about a man who decides to forgive his unfaithful lover, wake her from her sleeping-beauty state, and set her free; at bottom, love and redemption. Yakov calls Victor terrible (Cup of China) and selfish (episode two). Are you not curious what Victor did to earn those verdicts? Let us hazard a guess from a few details. In episode one, Katsuki’s ballet teacher, watching Victor’s free skate at the World Championships at Yoyogi, remarks that a gesture of pleading like that would be more moving performed by a purer-hearted young man (how could a heartbreaker like Victor ever be the one left behind?). In episode three, during the “agape” training, Plisetsky protests: you always skate it brimming with confidence yourself, so why is it wrong when I do? Victor replies that feelings of this kind cannot be explained. And Victor once began telling Katsuki, “I have been in love; my first lover...” before being cut off. Recall, too, his line in episode ten about having set aside “life” and “love” for more than twenty years.

From these details we can speculate. Why would a man with nothing but skating in his head ever fall in love? Perhaps his skating hit a wall. Victor was famous young, but he cannot have won forever; otherwise, at the advanced age of twenty-seven, he would be sitting on a dozen straight Grand Prix Final titles. Quite possibly, sometime after his early fame, he ran into a ceiling in his artistry. It is like the modeling we do in finance: on rigorous assumptions, an unlimited number of iterations can approach a perfect fit to reality. Even without ever loving, an artist of Victor’s caliber could, through endless practice, approximate, even all but perfectly simulate, a performance that looks like love. But that holds only on two conditions: the assumptions must match reality, and the iterations must be unlimited. Victor’s diligence I trust, so unlimited practice is no obstacle; assumptions, though, are only assumptions. We design them to idealize reality and ease the calculation, and the premise of portraying love without ever having loved is itself divorced from reality. So here is the question: would a man who gave himself wholly to skating choose to love for real, precisely to make the performance truer and lift his art? Probably yes. A Victor that dazzling was surely loved deeply once, perhaps even begged to stay by someone who loved him; but he loved only himself and his skating, and when Yakov discovered this he concluded that Victor was a thoroughly selfish man. (That his voice actor is the velvet-voiced Junichi Suwabe suggests the character was designed from the start as a grown man with some romantic history.) And when Victor watched Katsuki perform “Stay Close to Me,” he surely noticed that his own pleading had been a kind of abstraction, while Katsuki’s was real and moving.

Before meeting Katsuki, Victor’s defining trait was that glittering, universal charm: he knew very well how easily others fell for him, how easily he could feel that he was loved. Katsuki, by contrast, forever convinced he was a failure, had always loved bravely, Victor included, and could easily feel what it is to love someone else. In episode five Katsuki says he always used to feel alone, fighting by himself, scarcely able to feel that he was loved; after meeting Victor, he began to sense, from family, from friends, even from Victor himself, all those subtle feelings he tentatively calls love. So Katsuki, we gather, acquired the capacity to feel loved. Victor in turn learned, for Katsuki’s sake, to comfort, to embrace, to hold on, even to encourage skaters other than Katsuki, such as Christophe Giacometti. The contrast is telling: in episode six, at the Cup of China, Chris recalls the first time he met Victor, whose response was purely official, a perfect idol high on a pedestal, formula through and through; the encouragement in episode twelve, by contrast, springs from genuine appreciation, a way of expressing love to a friend. So Victor, having met Katsuki, acquired the capacity to love.

The other lead, the Russian Yuri, begins in a kind of fog. Before meeting Katsuki he cannot even voice his admiration: in episode one he clearly admires Katsuki yet “encourages” him so clumsily that it looks like deliberate bullying, and Katsuki even takes him for a Russian delinquent (ha). Later, through Victor’s “love as agape” training, he comes to understand his grandfather’s love, to understand that he is loved unconditionally, and so acquires the capacity to feel loved; then, slowly, he learns to love others. At the Rostelecom Cup in episode nine, with coach Victor away from Katsuki’s side, he goes out of his way to mark Katsuki’s birthday, bringing him katsudon pirozhki and asking, with real concern, why he did not give his all. At the very end, he skates his heart out precisely to keep Katsuki on the ice. When his grandfather says, “Yuratchka, you have grown strong,” it means he has grown strong because he has come to know love. Otherwise, true to form, he would simply have sneered at Katsuki one more time: “The ice doesn’t need two Yuris. Hurry up and retire, idiot!”

A few days ago I saw someone observe that the names themselves are delightful. Victor means victory; Yuri Katsuki’s full name, 胜生勇利, begins with 胜 and ends with 利, which together spell 胜利, victory; and Yuri Plisetsky shares Katsuki’s given name. Interestingly, in Christian doctrine God is three in one, and in this anime, too, love reveals itself gradually through the three leads, three seemingly independent points forming connections with one another. In geometry, three points determine a circle, and a circle is a small world complete in itself. The story, fittingly, ends full circle and happy: at the last, Katsuki, Victor, and Yuri meet on the bridge Victor crossed when he first left. The bridge is an image of completion. Katsuki, finally returning, and Victor, who first departed, approach from opposite directions, as if they had circled the whole earth, traveled through so many places only to arrive back at the starting point. It looks pointless; on the surface, not even their roles have changed. Yet now they have “life,” “love,” and a dream, the courage and the companions to reach the place they long for: everything on the ice that we call love.

Afterword: Love makes us strong. I am truly grateful that in my lifetime, in this winter that left me so dejected, while I am still young enough to make an effort, I met Yuri on Ice. To my family, my friends, and the finance career I am proudest of, I dedicate all my love and passion. We are born to make history.

Zhu Dongfeng
December 31, 2016

The Atlantic · by Nicholas Carr

“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “brain.” “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets: reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”

Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”

Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:

It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.

Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.

Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”


“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”

The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”

As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”

The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.

The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.

Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.

About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.

More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”

Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”

Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.

The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”

Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.

Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).

The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.

So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.

If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:

I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”

As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”

I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
