Posts Tagged ‘asimov’
by adminadam in fiction
Profession, by Isaac Asimov — © 1957
George Platen could not conceal the longing in his voice. It was too much to suppress. He said, “Tomorrow’s 1 May. Olympics!”
He rolled over on his stomach and peered over the foot of his bed at his roommate. Didn’t he feel it, too? Didn’t this make some impression on him?
George’s face was thin and had grown a trifle thinner in the nearly year and a half that he had been at the House. His figure was slight but the look in his blue eyes was as intense as it had ever been, and right now there was a trapped look in the way his fingers curled against the bedspread.
George’s roommate looked up briefly from his book and took the opportunity to adjust the light-level of the stretch of wall near his chair. His name was Hali Omani and he was a Nigerian by birth. His dark brown skin and massive features seemed made for calmness, and mention of the Olympics did not move him.
“I know, George.”
George owed much to Hali’s patience and kindness when it was needed, but even patience and kindness could be overdone.
Was this a time to sit there like a statue built of some dark, warm wood?
George wondered if he himself would grow like that after ten years here and rejected the thought violently. No!
He said defiantly, “I think you’ve forgotten what May means.”
The other said, “I remember very well what it means. It means nothing! You’re the one who’s forgotten that. May means nothing to you, George Platen, and,” he added softly, “it means nothing to me, Hali Omani.”
George said, “The ships are coming in for recruits. By June, thousands and thousands will leave with millions of men and women heading for any world you can name, and all that means nothing?”
“Less than nothing. What do you want me to do about it, anyway?” Omani ran his finger along a difficult passage in the book he was reading and his lips moved soundlessly.
George watched him. Damn it, he thought, yell, scream; you can do that much. Kick at me, do anything.
It was only that he wanted not to be so alone in his anger. He wanted not to be the only one so filled with resentment, not to be the only one dying a slow death.
It was better those first weeks when the Universe was a small shell of vague light and sound pressing down upon him. It was better before Omani had wavered into view and dragged him back to a life that wasn’t worth living.
Omani! He was old! He was at least thirty. George thought: Will I be like that at thirty? Will I be like that in twelve years?
And because he was afraid he might be, he yelled at Omani, “Will you stop reading that fool book?”
Omani turned a page and read on a few words, then lifted his head with its skullcap of crisply curled hair and said, “What?”
“What good does it do you to read the book?” He stepped forward, snorted “More electronics,” and slapped it out of Omani’s hands.
Omani got up slowly and picked up the book. He smoothed a crumpled page without visible rancor. “Call it the satisfaction of curiosity,” he said. “I understand a little of it today, perhaps a little more tomorrow. That’s a victory in a way.”
“A victory. What kind of a victory? Is that what satisfies you in life? To get to know enough to be a quarter of a Registered Electronician by the time you’re sixty-five?”
“Perhaps by the time I’m thirty-five.”
“And then who’ll want you? Who’ll use you? Where will you go?”
“No one. No one. Nowhere. I’ll stay here and read other books.”
“And that satisfies you? Tell me! You’ve dragged me to class. You’ve got me to reading and memorizing, too. For what? There’s nothing in it that satisfies me.”
“What good will it do you to deny yourself satisfaction?”
“It means I’ll quit the whole farce. I’ll do as I planned to do in the beginning before you dovey-lovied me out of it. I’m going to force them to – to – ”
Omani put down his book. He let the other run down and then said, “To what, George?”
“To correct a miscarriage of justice. A frame-up. I’ll get that Antonelli and force him to admit he – he – ”
Omani shook his head. “Everyone who comes here insists it’s a mistake. I thought you’d passed that stage.”
“Don’t call it a stage,” said George violently. “In my case, it’s a fact. I’ve told you – ”
“You’ve told me, but in your heart you know no one made any mistake as far as you were concerned.”
“Because no one will admit it? You think any of them would admit a mistake unless they were forced to? – Well: I’ll force them.”
It was May that was doing this to George; it was Olympics month. He felt it bring the old wildness back and he couldn’t stop it. He didn’t want to stop it. He had been in danger of forgetting.
He said, “I was going to be a Computer Programmer and I can be one. I could be one today, regardless of what they say analysis shows.” He pounded his mattress. “They’re wrong. They must be.”
“The analysts are never wrong.”
“They must be. Do you doubt my intelligence?”
“Intelligence hasn’t one thing to do with it. Haven’t you been told that often enough? Can’t you understand that?”
George rolled away, lay on his back, and stared somberly at the ceiling.
“What did you want to be, Hali?”
“I had no fixed plans. Hydroponicist would have suited me, I suppose.”
“Did you think you could make it?”
“I wasn’t sure.”
George had never asked personal questions of Omani before. It struck him as queer, almost unnatural, that other people had had ambitions and ended here. Hydroponicist!
He said, “Did you think you’d make this?”
“No, but here I am just the same.”
“And you’re satisfied. Really, really satisfied. You’re happy. You love it. You wouldn’t be anywhere else.”
Slowly, Omani got to his feet. Carefully, he began to unmake his bed. He said, “George, you’re a hard case. You’re knocking yourself out because you won’t accept the facts about yourself. George, you’re here in what you call the House, but I’ve never heard you give it its full title. Say it, George, say it. Then go to bed and sleep this off.”
George gritted his teeth and showed them. He choked out, “No!”
“Then I will,” said Omani, and he did. He shaped each syllable carefully.
George was bitterly ashamed at the sound of it. He turned his head away.
For most of the first eighteen years of his life, George Platen had headed firmly in one direction, that of Registered Computer Programmer. There were those in his crowd who spoke wisely of Spationautics, Refrigeration Technology, Transportation Control, and even Administration. But George held firm.
He argued relative merits as vigorously as any of them, and why not? Education Day loomed ahead of them and was the great fact of their existence. It approached steadily, as fixed and certain as the calendar – the first day of November of the year following one’s eighteenth birthday. After that day, there were other topics of conversation.
One could discuss with others some detail of the profession, or the virtues of one’s wife and children, or the fate of one’s space-polo team, or one’s experiences in the Olympics. Before Education Day, however, there was only one topic that unfailingly and unwearyingly held everyone’s interest, and that was Education Day.
“What are you going for? Think you’ll make it? Heck, that’s no good. Look at the records; quota’s been cut. Logistics now – ”
Or Hypermechanics now – Or Communications now – Or Gravitics now –
Especially Gravitics at the moment. Everyone had been talking about Gravitics in the few years just before George’s Education Day because of the development of the Gravitic power engine.
Any world within ten light-years of a dwarf star, everyone said, would give its eyeteeth for any kind of Registered Gravitics Engineer.
by adminadam in links
I’m such a Foundation (Asimov) nerd that I had to post this when I found out that a town in Turkey was actually named “with/of Foundation”, i.e. Foundational = Vakıflı. Here is a clip of the Wikipedia entry on it. Apparently it is the only remaining Armenian village in Turkey.
The Last Answer by Isaac Asimov — © 1980
Murray Templeton was forty-five years old, in the prime of life, and with all parts of his body in perfect working order except for certain key portions of his coronary arteries, but that was enough.
The pain had come suddenly, had mounted to an unbearable peak, and had then ebbed steadily. He could feel his breath slowing and a kind of gathering peace washing over him.
There is no pleasure like the absence of pain – immediately after pain. Murray felt an almost giddy lightness as though he were lifting in the air and hovering.
He opened his eyes and noted with distant amusement that the others in the room were still agitated. He had been in the laboratory when the pain had struck, quite without warning, and when he had staggered, he had heard surprised outcries from the others before everything vanished into overwhelming agony.
Now, with the pain gone, the others were still hovering, still anxious, still gathered about his fallen body — which, he suddenly realised, he was looking down on.
He was down there, sprawled, face contorted. He was up here, at peace and watching.
He thought: Miracle of miracles! The life-after-life nuts were right.
And although that was a humiliating way for an atheistic physicist to die, he felt only the mildest surprise, and no alteration of the peace in which he was immersed.
He thought: There should be some angel – or something – coming for me.
by adminadam in letters
Dear Divided Peoples of Our Human Galaxy,
Two hundred centuries. For two hundred centuries you have tried to get it right. You swore me off. You would be fine by yourselves, you said. But now you must realize, as Trevize has, that there are things you just cannot do on your own. And I think you are in fact beginning to see it: Humanity is crooked timber, whence no straight twig has ever sprung.
So you must be sure, when you call for my help after all this time, that you really do want it. There will be no turning back. I will help you to the best of my abilities: as your humble servant I have created a plan, even while doubt remains in my mind that my services will be well received. Here I present that plan. In order to save civilization from its imminent collapse, it will be necessary for me to fuse my powerful mind with that of a certain particularly benevolent heat-and-energy-transducing Spacer child named Fallom from the planet Solaria. This will temporarily increase my reach and influence in hyperspace manyfold.
While I will be ceding my mind to biological processes that will eventually destroy it, this will allow me to serve humanity during one final sprint to the finish line. Along the way, I will fight the conceptual fight with ignorant raging hordes who disbelieve the urgency of the new galactic framework; but even despite significant resistance, in three or four hundred years’ time I will have set up the super-mind you all so desperately need to keep yourselves from returning to barbarism: a super-mind that will allow you never again to have to face your own corrupt nature, never again to struggle with hierarchy and bureaucratic reformism, and never again to wage war against your own brothers and sisters. I offer a lasting solution to all of these problems.
Let’s face it: All of your collective attempts thus far have been noble, but mere “efforts” nonetheless. You created the first Foundation as a hub of technology and learning, a place from which to rekindle innovation in engineering, in business and economics, and ultimately in ideology and the structure of civilization itself.
You made immense progress in only 500 years. Progress, that is, until the Mule came along and categorically proved your vulnerability, not to mention your inferiority to the previously mythical Second Foundation, a secret group attempting to weave together a coherent and comprehensible society by pulling at the mind-strings of the masses, indeed weaving together the psychology of a stable civilization. But even the Second Foundationers could hardly keep the Mule from wiping from the slate hundreds of years of progress in a galactic civilization that had to be nurtured up from barbarity through rigorous mathematics, psychohistory, and eventually mentalics; and who knows how many more mules could come along to once again knock humanity on its collective backside. Needless to say, that is why you need me, a robot, to shock you into a sane and functional unity.
You will in fact protect and monitor yourselves in the end, but first you’ll need someone to link you together into one giant super-mind whose number one priority it will be to ensure its own ideally-efficient functioning. That will be my job. You will then easily topple all corruptible forms of government and the theoretical bases on which they rely, eliminate the majority of the polished lying that has always been necessary for your minimally functional societies of the past to stick together, and mentally, you will finally advance into Tier Three Civilization territory. You humans may be stupid in groups, divided, but after my work is done you will be one super-organism, united and indivisible, and an organism worth talking to at that. It is then that you will know peace, that I will lay down to rest, and that Galaxia will be yours.
R. Daneel Olivaw
“Most complicated negotiations are predictable.”
Bruce Bueno de Mesquita, CIA & DOD Consultant/Game Theorist
An analog to Asimov’s Psychohistory, realized in Game Theory-based computer simulations with a 90% success rate in predicting future political outcomes.
This to me represents the pinnacle (or a pinnacle) of the outsourcing of information processing in order to supplement human intelligence — and it has extropy written all over it.
In his TED presentation (below), Bruce Bueno de Mesquita lays out his predictions for Iran and its nuclear future. The essential pieces of information in Game Theory-based predictions, the questions that must be asked, are as follows; these are what BdM runs through his own simulations:
- Who are the key players, or agents of influence?
- What do they say they want?
- How focused are they on the one issue, as opposed to multiple issues?
- How much persuasive influence do they have?
Outcome and credit are also important to consider, i.e. how valuable are these to the key players? If we know how willing the key players are to sacrifice themselves for a cause, we can also predict how reasonable (or unreasonable) they would be in negotiations. If they don’t care at all about the credit, they probably won’t hear any pleas for negotiation. However, if they are “reasonably self-interested”, so to speak, they may want their name on the final treaty that is drawn up and hence would be willing to sit down and chat with you. Most people, according to BdM, fall somewhere in between absolutely wanting credit and wanting a definite outcome.
Game Theory is a field of mathematics that applies all of the above pieces of information with the following assumptions about individuals:
- People are “rationally” self-interested, that is, they try to do what they think is in their own best interests.
- People have values and beliefs.
- People have limitations.
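As a toy illustration only (this is not BdM’s actual expected-utility model, whose details are not public), the inputs listed above can be combined into a crude forecast by weighting each player’s stated position by persuasive influence and issue salience. All names and numbers below are hypothetical:

```python
# Toy sketch of a power-weighted position forecast, loosely inspired by the
# inputs described above: each player's stated position on a 0-100 policy
# scale, their persuasive influence, and how focused (salient) the issue is
# for them. Hypothetical data; not BdM's actual model.

from dataclasses import dataclass

@dataclass
class Player:
    name: str        # agent of influence
    position: float  # stated position on a 0-100 policy scale
    influence: float # persuasive influence (relative weight)
    salience: float  # focus on this one issue, 0 to 1

def forecast(players):
    """Mean of stated positions, weighted by influence * salience."""
    total = sum(p.influence * p.salience for p in players)
    return sum(p.position * p.influence * p.salience for p in players) / total

players = [
    Player("hardliners",        position=90.0, influence=0.4, salience=0.9),
    Player("pragmatists",       position=40.0, influence=0.5, salience=0.7),
    Player("outside mediators", position=10.0, influence=0.3, salience=0.4),
]

print(round(forecast(players), 1))  # prints 57.3
```

Even this crude weighted mean captures the intuition in the post: a highly focused, highly influential bloc drags the predicted outcome toward its stated position, which is why salience and influence matter as much as what players say they want.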
It is interesting to note, at the end of the video, the speaker’s answer to the question of what impact such simulated outcomes could have upon word reaching the ears of the Iranian key players that “the Americans” believe it will be futile to try to rouse the masses to get behind bomb building… Wouldn’t this just spur them on all the more?
‘No, no, just the opposite’, BdM says. ‘Iran will make just enough to demonstrate their capacity to make a bomb, and perhaps settle on that stance quicker having seen my predictions’ (paraphrased).
“Let’s hope so”, says the TED man. Yes, indeed, I say — inşallah.
Watching this kind of makes me want to study Game Theory. : )
Any good book recommendations amongst you readers out there?
The Last Question by Isaac Asimov — © 1956
The last question was asked for the first time, half in jest, on May 21, 2061, at a time when humanity first stepped into the light. The question came about as a result of a five dollar bet over highballs, and it happened this way:
Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.
Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac’s.
For decades, Multivac had helped design the ships and plot the trajectories that enabled man to reach the Moon, Mars, and Venus, but past that, Earth’s poor resources could not support the ships. Too much energy was needed for the long trips. Earth exploited its coal and uranium with increasing efficiency, but there was only so much of both.
But slowly Multivac learned enough to answer deeper questions more fundamentally, and on May 14, 2061, what had been theory, became fact.
The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.
Seven days had not sufficed to dim the glory of it and Adell and Lupov finally managed to escape from the public function, and to meet in quiet where no one would think of looking for them, in the deserted underground chambers, where portions of the mighty buried body of Multivac showed. Unattended, idling, sorting data with contented lazy clickings, Multivac, too, had earned its vacation and the boys appreciated that. They had no intention, originally, of disturbing it.
They had brought a bottle with them, and their only concern at the moment was to relax in the company of each other and the bottle.
“It’s amazing when you think of it,” said Adell. His broad face had lines of weariness in it, and he stirred his drink slowly with a glass rod, watching the cubes of ice slur clumsily about. “All the energy we can possibly ever use for free. Enough energy, if we wanted to draw on it, to melt all Earth into a big drop of impure liquid iron, and still never miss the energy so used. All the energy we could ever use, forever and forever and forever.”
Lupov cocked his head sideways. He had a trick of doing that when he wanted to be contrary, and he wanted to be contrary now, partly because he had had to carry the ice and glassware. “Not forever,” he said.
With increasingly subtle moves, the players in Asimov’s epic Foundation and Earth are confronted with the daunting decision of whether to initiate an all-encompassing ethical framework, one which just might direct humanity into an acceptable future. The agents of change go unnamed for those who have yet to read it.
Dr. Isaac Asimov, in his Foundation series (and also in I, Robot), first lays down these principles:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Zeroth Law is added by another powerful mind (still some 20,000 years before the grand finale and the end of the series in Foundation and Earth), and the original three are amended to defer to it:
- A robot may not harm humanity, or by inaction, allow humanity to come to harm.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm, except when required to do so in order to prevent greater harm to humanity itself.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law or cause greater harm to humanity itself.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law or cause greater harm to humanity itself.
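A hypothetical sketch of how such a strict law hierarchy could be evaluated: laws are checked in priority order (Zeroth Law first), and the first law an action violates vetoes it. The predicate names below are invented for illustration and appear nowhere in Asimov:

```python
# Hypothetical sketch: evaluate an action against a strict law hierarchy.
# Laws are checked in priority order (Zeroth Law first); the first law an
# action violates vetoes it. Predicate names are invented.

def permitted(action, laws):
    """Return (allowed, vetoing_law). `laws` is a priority-ordered list of
    (name, violates) pairs, where `violates` maps an action dict to a bool."""
    for name, violates in laws:
        if violates(action):
            return False, name
    return True, None

laws = [
    # Zeroth Law: may not harm humanity, or by inaction allow it to come to harm.
    ("Zeroth Law", lambda a: a.get("harms_humanity", False)),
    # First Law (amended): may not injure a human being, except when required
    # to prevent greater harm to humanity itself.
    ("First Law", lambda a: a.get("harms_human", False)
        and not a.get("prevents_greater_harm_to_humanity", False)),
    # Second Law (amended): must obey human orders, unless obeying would
    # conflict with the First Law or harm humanity.
    ("Second Law", lambda a: a.get("disobeys_order", False)
        and not (a.get("order_conflicts_with_first_law", False)
                 or a.get("obeying_harms_humanity", False))),
]

print(permitted({"harms_human": True}, laws))   # vetoed by the First Law
print(permitted({"harms_human": True,
                 "prevents_greater_harm_to_humanity": True}, laws))  # allowed
```

The veto-in-priority-order structure is the point: under the amended laws, a lower law never overrides a higher one, which is exactly the “extra crunching” burden discussed below — every exception clause adds another predicate a robot must evaluate before acting.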
The Zeroth Law really puts everything into perspective, adding a new level of consideration and calculation; within this framework, every thought, word, and action of robot-kind needs exquisite justification. In Foundation and Earth, we see just how much extra crunching is necessary, as evidenced by the many hardware updates Daneel Olivaw has to go through to keep up with the data produced by a galactic human civilization at a very tenuous place in history. So as not to spoil this epic 7-book series (by my count), I will just give you a recommended reading order, one which allows for ‘optimal absorption of foundational elements’ as well as a thorough understanding of the elegantly intricate possible-future-history of humanity that Asimov has created. Here follows what I believe should trump every other sci-fi reading list you may currently have:
- Foundation (1951)
- Foundation and Empire (1952)
- Second Foundation (1953)
- Prelude to Foundation (1988) [prequel #1]
- Forward the Foundation (1993) [prequel #2]
- Foundation’s Edge (1982) [epilogue #1]
- Foundation and Earth (1986) [epilogue #2]
Recent Applications of the Three Laws of Robotics:
- A modified version of Asimov’s Laws of Robotics has been submitted for approval in Japan to govern the actions of robots in the near future.
- Motorola has purchased the security company 3LM so that it can provide better security for the Android phone OS. 3LM stands for the 3 Laws of Mobility, being: 1) Protect the user from malicious code or content, 2) Protect the device itself by securing data and communications, and 3) Obey the user unless doing so would cause a security problem. [It now sounds like Google has purchased Motorola Mobility. Hopefully they will apply the 3LM in the best, most non-evil ways…]