

AI Metasolutions

by adminadam in videos

We are dealing with information overload. The data overhang in fields like Big Data and genomics is crushing us. We lack the means to process so much information. Entertainment, if we let it, could eat more of our time than remains in the universe. How can we personalize and streamline our data, our technology, and our experiences to reclaim our time and innovate more?



System complexities such as climate and weather patterns, disease and globalization, macroeconomic and political trends, and other physical processes are almost certainly impossible for any unaided human to synthesize perfectly at this time. Perhaps A.I. systems and algorithms, such as those being built by DeepMind at Google, can help us become masters of our time and of our environments. This is the argument, presented quite compellingly I might add, that Demis Hassabis advances in a talk at the RSA conference in November of 2016.

Demis Hassabis on the benefits to humanity of accelerating technology:


The Singularity: When Computers Overtake Humans

by adminadam in articles

What would it mean for Computers to overtake Humans?

How do we define Humans?

And how do we define Computers as different from Humans?

And are Humans in fact Computers as well?

One might argue that humans, being biological, and computers, being mechanical and electrical, are different. Likewise, one might describe the human brain as a machine, in addition to calling computers ‘machines’.

The question, if each type of brain is trying to understand the other, is who is going to understand whom first?

Will computers outsmart us? Will they outfeel us? Will they outdo us in every area of life? Can they be creative? Etc.

Or will humans understand and enhance their own minds through a process of self-learning boosted by machines? Will we end up optimizing our brains? Will we deliberately trigger (or act in ways that bring about) a Human Intelligence Explosion? This is the alternative to the idea that we will soon see a Computer Intelligence Explosion, and that by 2029 or so we'll be in Artificial Brain territory, complete with feelings and creativity and ability and knowledge. Wow! And then others think that maybe the Singularity, the point of no return, will be a culmination of both human and machine intelligence, a merging of the two life forms, where now instead of animals and automatons we'll have auto-animals and animatrons, humans on cruise control and robots high on weed. And maybe at that point we just won't really care anymore what happens. It's all good, man. Pass the spliff.

No, really. The idea is that technology is accelerating, and that that acceleration is accelerating, and that that acceleration is… Well you get the idea. We are innovating up to the point where the innovator will no longer be us — or so it’s thought — because all of our technology is converging on this point — and once that point is passed, the reins will no longer be in our hands; the living, breathing technology itself will be in control, and the computers will quickly orient themselves to do whatever they want.
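The "acceleration that is itself accelerating" idea can be made concrete with a toy numerical model. This is purely my own illustration, not anything from the talk or the post: compare linear growth (a fixed increment), exponential growth (a fixed growth factor), and superexponential growth, where the growth rate itself compounds each step. The specific rates chosen are arbitrary.

```python
# Toy model of "accelerating acceleration" (illustrative numbers only).
# Linear: capability grows by a fixed amount each step.
# Exponential: capability grows by a fixed factor each step.
# Superexponential: the growth *rate* itself grows each step,
# which is the intuition behind the Singularity argument.

def simulate(steps=10):
    linear, exponential, superexp = 1.0, 1.0, 1.0
    rate = 0.5  # starting growth rate for the superexponential case
    history = []
    for _ in range(steps):
        linear += 1.0             # constant increment
        exponential *= 1.5        # constant growth factor
        superexp *= (1.0 + rate)  # growth factor that itself grows
        rate *= 1.5               # the acceleration accelerates
        history.append((linear, exponential, superexp))
    return history

for i, (lin, exp_, sup) in enumerate(simulate(), start=1):
    print(f"step {i:2d}: linear={lin:6.1f} "
          f"exponential={exp_:12.1f} superexp={sup:16.1f}")
```

After only ten steps the superexponential curve dwarfs the exponential one, which in turn dwarfs the linear one; that widening gap is the shape of the claim being made above.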

It’s a scary and disturbing thought that we wouldn’t know (or even be able to know) what such a self-emergent superintelligence would want, what it would be motivated by, or what it would try to do, once it realized it was in control (or at least became aware of itself). And it’s also fascinating, the concept that we will reach a point where history itself will shoot light years out in front of us all of a sudden, where spacetime will be stretched and pulled away at nearly infinite speed. We will essentially be stuck in a black hole without any window into the future as it is being created by the machine. And could we predict what that machine would want? No. That is the terrifying and, for some people, exciting essence of the “A.I. Singularity”.

But what if it doesn’t happen that way?

One alternative, as I’ve mentioned, is that humans incorporate computers completely; the question then is whether we will simply overtake ourselves. Perhaps we will become seamlessly integrated with our technology. I could see this happening in a number of ways:

We have already figured out that we can perform basic chemical ‘calculations’ in our bodies, that we can set up chemical triggers. I wrote about this here. Basically, we can become DNA-based bio-computers: human, but with added defenses and mechanisms, such as the ability to release aspirin into our own bloodstreams during a heart attack. We already have pacemakers that perform such regulatory functions, and we are moving from mechanical/electrical, to chemical, and eventually to biological (read: stem cell) solutions to the wear-and-tear and failure of various parts of ourselves.

The next step is actually just a subtle shift toward having machines do more and more of our thinking for us. Where before we had physical encyclopedias, now we have Google and Wikipedia; where now we have instant smartphone messaging, tomorrow we may have digital telepathy. It’s important to point out that this goes beyond offloading or accelerating current functions of our brains and bodies; it’s a phase change to a new level of human effectiveness, insight, and ability, in which we do more and more that would have been impossible before. Extrapolating this human intelligence trend, future versions of ourselves will do things that seem equally impossible from our current vantage point. That’s the idea behind a fused Human/Computer Singularity.

I implore you to read Kevin Kelly’s What Technology Wants if this whets your appetite for the study of technology and where it’s going.