Posts Tagged ‘apotheosis’


The Singularity: When Computers Overtake Humans

by adminadam in articles

What would it mean for Computers to overtake Humans?

How do we define Humans?

And how do we define Computers as different from Humans?

And are Humans in fact Computers as well?

One might argue that humans, being biological, and computers, being mechanical and electrical, are different. Likewise, one might describe the human brain as a machine, in addition to calling computers ‘machines’.

The question, if each type of brain is trying to understand the other, is which will come to understand the other first?

Will computers outsmart us? Will they outfeel us? Will they outdo us in every area of life? Can they be creative? Etc.

Or will humans understand and enhance their own minds through a process of self-learning boosted by machines? Will we end up optimizing our brains? Will we decide to trigger (or act in ways that bring about) a Human Intelligence Explosion? This is the alternative to the idea that we will soon see a Computer Intelligence Explosion, and that by 2029 or so we’ll be in Artificial Brain territory, complete with feelings and creativity and ability and knowledge. Wow! And then others think that maybe the Singularity, the point of no return, will be a culmination of both human and machine intelligence, a merging of the two life forms, where now instead of animals and automatons we’ll have auto-animals and animatrons, humans on cruise control and robots high on weed. And maybe at that point we just won’t really care anymore what happens. It’s all good, man. Pass the spliff.

No, really. The idea is that technology is accelerating, and that that acceleration is accelerating, and that that acceleration is… Well you get the idea. We are innovating up to the point where the innovator will no longer be us — or so it’s thought — because all of our technology is converging on this point — and once that point is passed, the reins will no longer be in our hands; the living, breathing technology itself will be in control, and the computers will quickly orient themselves to do whatever they want.
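The “acceleration of acceleration” idea above can be made concrete with a toy model. This is purely illustrative: the starting values and growth rates below are invented numbers, not measurements of real technological progress — the only point is the qualitative gap between growth that is steady, growth that compounds, and growth whose rate itself compounds.

```python
# Toy model of three growth regimes for "technological capability".
# All numbers are hypothetical; only the shape of the curves matters.

def linear(steps, rate=1.0):
    """Capability grows by a fixed amount each step."""
    x = 1.0
    for _ in range(steps):
        x += rate
    return x

def exponential(steps, rate=0.5):
    """Capability grows by a fixed fraction each step (accelerating)."""
    x = 1.0
    for _ in range(steps):
        x *= 1 + rate
    return x

def superexponential(steps, rate=0.5, rate_growth=0.2):
    """The growth rate itself grows each step: the acceleration accelerates."""
    x = 1.0
    for _ in range(steps):
        x *= 1 + rate
        rate *= 1 + rate_growth  # the fractional gain compounds too
    return x

for n in (5, 10, 20):
    print(n, linear(n), round(exponential(n), 1), round(superexponential(n), 1))
```

Run it and the third column quickly dwarfs the other two — which is the intuition behind “we are innovating up to the point where the innovator will no longer be us.”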

It’s a scary and disturbing thought that we wouldn’t know (or even be able to know) what such a self-emergent superintelligence would want, what it would be motivated by, or what it would try to do, once it realized it was in control (or at least became aware of itself). And it’s also fascinating, the concept that we will reach a point where history itself will shoot light years out in front of us all of a sudden, where spacetime will be stretched and pulled away at nearly infinite speed. We will essentially be stuck in a black hole without any window into the future as it is being created by the machine. And could we predict what that machine would want? No. That is the terrifying and, for some people, exciting essence of the “A.I. Singularity”.

But what if it doesn’t happen that way?

One alternative, as I’ve mentioned, is that humans incorporate computers completely — the question then becomes whether we will simply overtake ourselves. Perhaps we will become seamlessly integrated with our technology. I could see this happening in a number of ways:

We have already figured out that we can perform basic chemical ‘calculations’ in our bodies, that we can set up chemical triggers. I wrote about this here. Basically we can become DNA-based bio-computers, human, but with added defenses and mechanisms, such as the ability to release aspirin into our own bloodstreams if we have a heart attack. We already have pacemakers that can perform such regulatory functions, and we are moving from mechanical/electrical, to chemical, and eventually to biological (read: stem cell) solutions to wear-and-tear and failure of various parts of ourselves. The next step is actually just a subtle shift toward having machines do more and more of our thinking for us. Where before we had physical encyclopedias, now we have Google and Wikipedia; where now we have instant smartphone messaging, tomorrow we may have digital telepathy. Of course here it’s important to point out that this goes beyond offloading or accelerating current functions of our brains and bodies — it’s a phase change to a new level of human effectiveness, insight, and ability — we are now doing more and more that would have been impossible before. And to draw out this human intelligence trend: future versions of ourselves will do things that seem equally impossible to our current conception! That’s the idea behind a fused Human/Computer Singularity.
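The “release aspirin if we have a heart attack” idea is, at bottom, the sense–compare–act loop a pacemaker already runs. A minimal sketch of that loop, with a hypothetical biomarker name, threshold, and dose (none of these are real medical values):

```python
# Hedged sketch of a threshold-triggered release, modeled on the
# sense-compare-act loop described above. The marker name, threshold,
# and dose are hypothetical placeholders, not medical guidance.

def check_and_release(marker_level, threshold=0.4, dose_mg=325):
    """Return the dose to release if the cardiac marker crosses the threshold.

    Sense (marker_level), compare (threshold), act (release or do nothing).
    """
    if marker_level >= threshold:
        return dose_mg  # trigger fires: release the stored dose
    return 0            # below threshold: do nothing

# Hypothetical sensor readings over time; only the last one trips the trigger.
readings = [0.01, 0.02, 0.03, 0.90]
released = [check_and_release(r) for r in readings]
```

The chemical-trigger version would implement the same conditional in molecules rather than silicon, but the logic is identical.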

I implore you to read Kevin Kelly’s What Technology Wants if this whets your appetite for the study of technology and where it’s going.


Apotheosis takes a wrong turn

by adminadam in dialogue

He kind of glided.
What? Like no legs?
Yeah. Or perhaps it was foggy…
lol. silent skateboard?
Who woulda thunk it…
Yeah, right? God in our neighborhood…


Let’s help germinate this seed

by adminadam in fiction, prose

An epic story about meeting god on a train.
Written by Harry Stottle @

Talking to God

I met god the other day.

I know what you’re thinking. How the hell did you know it was god?

Well, I’ll explain as we go along, but basically he convinced me by having all, and I do mean ALL, the answers. Every question I flung at him he batted back with a plausible and satisfactory answer. In the end, it was easier to accept that he was god than otherwise.

Which is odd, because I’m still an atheist and we even agree on that!

It all started on the 8.20 back from Paddington. Got myself a nice window seat, no screaming brats or drunken hooligans within earshot. Not even a mobile phone in sight. Sat down, reading the paper and in he walks.

What did he look like?

Well not what you might have expected that’s for sure. He was about 30, wearing a pair of jeans and a “hobgoblin” tee shirt. Definitely casual. Looked like he could have been a social worker or perhaps a programmer like myself.

‘Anyone sitting here?’ he said.

‘Help yourself’ I replied.

Sits down, relaxes, I ignore and back to the correspondence on genetic foods entering the food chain…

Train pulls out and a few minutes later he speaks.

Can I ask you a question?

Fighting to restrain my left eyebrow, I replied ‘Yes’ in a tone which was intended to convey that I might not mind one question, and possibly a supplementary, but I really wasn’t in the mood for a conversation…

Why don’t you believe in god?

The Bastard!

I love this kind of conversation and can rabbit on for hours about the nonsense of theist beliefs. But I have to be in the mood! It’s like when a Jehovah’s Witness knocks on your door 20 minutes before you’re due to have a wisdom tooth pulled. Much as you’d really love to stay… You can’t even begin the fun. And I knew, if I gave my standard reply we’d still be arguing when we got to Cardiff. I just wasn’t in the mood. I needed to fend him off.

But then I thought ‘Odd! How is this perfect stranger so obviously confident – and correct – about my atheism?’ If I’d been driving my car, it wouldn’t have been such a mystery. I’ve got the Darwin fish on the back of mine – the antidote to that twee christian fish you see all over. So anyone spotting that and understanding it would have been in a position to guess my beliefs. But I was on a train and not even wearing my Darwin “Evolve” tshirt that day. And ‘The Independent’ isn’t a registered flag for card carrying atheists, so what, I wondered, had given the game away.

‘What makes you so certain that I don’t?’

‘Because,’ he said, ‘I am god – and you are not afraid of me.’

You’ll have to take my word for it of course, but there are ways you can deliver a line like that – most of which would render the speaker a candidate for an institution, or at least prozac. Some of which could be construed as mildly amusing.

Conveying it as “indifferent fact” is a difficult task but that’s exactly how it came across. Nothing in his tone or attitude struck me as even mildly out of place with that statement. He said it because he believed it and his rationality did not appear to be drug induced or the result of a mental breakdown.

‘And why should I believe that?’

‘Well,’ he said, ‘why don’t you ask me a few questions? Anything you like, and see if the answers satisfy your sceptical mind.’

This is going to be a short conversation after all, I thought.

‘Who am I?’

Stottle. Harry Stottle, born August 10 1947, Bristol, England. Father Paul, Mother Mary. Educated Duke of York’s Royal Military School 1960–67, Sandhurst and Oxford, PhD in Exobiology, failed rock singer, full time trade union activist for 10 years, latterly self-employed computer programmer, web author and aspiring philosopher. Married to Michelle, American citizen, two children by a previous marriage. You’re returning home after what seems to have been a successful meeting with an investor interested in your proposed product-tracking anti-forgery software and protocol, and you ate a full english breakfast at the hotel this morning except that, as usual, you asked them to hold the revolting english sausages and give you some extra bacon.

He paused.

You’re not convinced. Hmmm… what would it take to convince you?

‘Oh right! Your most secret password and its association.’

A serious hacker might be able to obtain the password, but no one else – and I mean no one else – knows its association.

He did.



The Last Question

by adminadam in fiction, prose

The Last Question by Isaac Asimov — © 1956

The last question was asked for the first time, half in jest, on May 21, 2061, at a time when humanity first stepped into the light. The question came about as a result of a five dollar bet over highballs, and it happened this way:

Alexander Adell and Bertram Lupov were two of the faithful attendants of Multivac. As well as any human beings could, they knew what lay behind the cold, clicking, flashing face — miles and miles of face — of that giant computer. They had at least a vague notion of the general plan of relays and circuits that had long since grown past the point where any single human could possibly have a firm grasp of the whole.

Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough — so Adell and Lupov attended the monstrous giant only lightly and superficially, yet as well as any men could. They fed it data, adjusted questions to its needs and translated the answers that were issued. Certainly they, and all others like them, were fully entitled to share in the glory that was Multivac’s.

For decades, Multivac had helped design the ships and plot the trajectories that enabled man to reach the Moon, Mars, and Venus, but past that, Earth’s poor resources could not support the ships. Too much energy was needed for the long trips. Earth exploited its coal and uranium with increasing efficiency, but there was only so much of both.

But slowly Multivac learned enough to answer deeper questions more fundamentally, and on May 14, 2061, what had been theory, became fact.

The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.

Seven days had not sufficed to dim the glory of it and Adell and Lupov finally managed to escape from the public function, and to meet in quiet where no one would think of looking for them, in the deserted underground chambers, where portions of the mighty buried body of Multivac showed. Unattended, idling, sorting data with contented lazy clickings, Multivac, too, had earned its vacation and the boys appreciated that. They had no intention, originally, of disturbing it.

They had brought a bottle with them, and their only concern at the moment was to relax in the company of each other and the bottle.

“It’s amazing when you think of it,” said Adell. His broad face had lines of weariness in it, and he stirred his drink slowly with a glass rod, watching the cubes of ice slur clumsily about. “All the energy we can possibly ever use for free. Enough energy, if we wanted to draw on it, to melt all Earth into a big drop of impure liquid iron, and still never miss the energy so used. All the energy we could ever use, forever and forever and forever.”

Lupov cocked his head sideways. He had a trick of doing that when he wanted to be contrary, and he wanted to be contrary now, partly because he had had to carry the ice and glassware. “Not forever,” he said.
