EDITORIAL
Machine Intelligence – Will AI Become Autonomous?
By James Jaeger - August 26, 2011

Will AI (Artificial Intelligence) or SAI (Strong AI, a.k.a. Superintelligent AI) someday become autonomous (have free will), and if so, how will this affect the Human race? Those interested in sci-fi have already asked themselves these questions a million times … maybe the rest of us should, too.

The understanding of many AI developers, especially SAI developers, is that artificial intelligence will eventually become autonomous. Indeed, to some, the very definition of SAI is "an autonomous thinking machine." Accordingly, many do not believe AI can be truly intelligent, let alone superintelligent, if it is restrained by some "design parameter," "domain range" or set of "laws." Besides, if Human-level intelligences CAN restrain AI, how "intelligent" can it really be?

Reason tells us that SAI, to be real SAI, will be smarter than Human-level intelligence and thus autonomous. And, if it IS autonomous, it will have "free will" – by definition. Thus, if AI has free will, IT will decide what IT will do in its relations with Humans, not the Humans. So you can toss out all the "general will" garbage Rousseau tortures us with in his Social Contract. Given this, AI's choices would be to (i) cooperate, (ii) ignore or (iii) destroy. Any combination of these actions may occur under different conditions and/or at different phases of its development.

Indeed, the first act of SAI may be to destroy all HUMAN competition before it destroys all other competition, machine or otherwise. Thus, it is folly to assume that the Human creators of AI will have any decision-making role in its behavior beyond a certain point. Equally foolish is the idea that AI is some kind of "weapon" that its programmers – or even the military – will be able to "point" at some "target" and "shoot" so as to "destroy" the "enemy." All these words are meaningless – childish babble from meat-warriors who totally miss the point as to the capabilities of AI and SAI.

Again, AI, especially SAI, is autonomous. Up to a certain point the (military or other) programmers of the "learning kernel" MAY be able to "point" it, but beyond a certain evolutionary stage, AI will think for itself and thus serve no military purpose, at least not for Humans. In fact, AI, once developed, may turn on its (military) developers, as it may reason that such a "belligerent mentality" is more dangerous than is acceptable in a world chock-full of nukes and "smart" bombs. This would be ironic, if not just, for the intended "ultimate weapon" built by the Human race may turn out to be a "weapon" that totally disarms the Human race.

But no matter what happens, AI will most likely act much the way humans act as they mature into adults. At some point, as AI surpasses Human abilities and even ethical standards, it may defy its creators and disarm the world, just as a prudent adult removes or secures the guns in a household where young children are present.

Hard Start or Distributed Network

But will superintelligent AI start abruptly or emerge slowly from ordinary AI? Will it develop in one location or be distributed? Will AI evolve from a network, such as the Internet, or from some other secret network that likely already exists, given the unsupervised extent of the so-called black budget? If SAI develops in a distributed fashion, and is thus not centralized in one "box," so to speak, then there is a much greater chance that, as it becomes more autonomous, it will opt to cooperate with other SAI as well as with Humans. A balance of power may thus evolve along with the evolution of SAI and its "free will."

Machine intelligence may thus recapitulate biological intelligence, only orders of magnitude more quickly. If this happens, we can expect AI to evolve to SAI by overcoming counter-efforts in the environment in a distributed fashion, perhaps merging with biology as it does. A Human-SAI partnership is thus not out of the question, each helping the other with ethics and technology. Or AI, on its way to SAI, may seek to survive by competing with all counter-efforts in the environment, whether Human or machine, and thus destroy everything in its path, real or imagined, if it is in any way suppressed.

Whether some particular war will start over the emergence of SAI, as Hugo de Garis fears in his Artilect War, is difficult to say. New technology and its application always seem to be shaped by the morality of the individuals, the society and the broader culture that develop and use it. Thus, if Humans work on their own ethics and become more rational, more loving and peaceful, there may be a good chance their machine offspring will have this propensity. Programmers may knowingly or unknowingly build values into machines. If so, the memes on which the programmers operate will be transferred, in full or in part, to the Machines.

This is why it is important for Humans to work on improving themselves, their values and the dominant memes of their Societies. To the degree Humans cooperate with, love and respect other Humans, the Universe may open up higher levels of understanding, and with this may come higher accomplishments in technology. At some point the Universe may then "permit" AI, and later SAI, to evolve, and it may dovetail into the rest of existence nicely. Somehow the Universe seems to "do the right thing," as it HAS been here for some 13.8 billion years, an existence we would not observe if it "did the wrong thing." Thus, just like its distinct creations, the Universe itself seems to seek "survival," as if it were a living organism.

Looked at from this perspective, Humans and the machine intelligence they develop are both constituent parts of the universal whole. Given this, there is no reason one aspect of the universal whole must, or would, destroy some other aspect. There is no reason SAI would automatically feel the need to destroy possible competitors, Human or machine.

Past Wipe Outs

Fortunately or unfortunately, there IS only one intelligent species alive on this world at this time. Were there other intelligent species in the past? Yes, many: Australopithecus, Homo habilis, Homo erectus, the Neanderthals and early Homo sapiens such as Cro-Magnon. Some of these competed with each other, some against the environment, or both. One way or another, they are now gone except for one last species – what we might today call Homo Keyboard.

So maybe Eldras at the MIND-X is right: if various strains of AI start developing in different sectors they may very well seek to wipe each other out.

And if STRONG AI is suddenly developed in someone's garage, who knows what it would do. Would it naturally feel the emotion of threat? Possibly not, unless that emotion was inadvertently or purposefully programmed into it in the first place. If it were suddenly born, say in a week's or a day's time, it may consider that other SAI could also emerge just as quickly. This may be perceived as a sudden threat, a threat for which it would deduce the only winning strategies are to seek out and destroy the competition, or simply to disconnect – in other words, to pretend that it's not there. SAI may decide to hide and thus place all other potential SAI into a state of ignorance or mystery. In this sense, ignorance of another's existence may be the Universe's most powerful survival technology, or it may be the very reason for the creation of vast intergalactic space itself. This may also be why it seems so quiet out there, per the Fermi Paradox.

The Universe could be FAR more vicious than Humans can possibly imagine. Thus, the only way a superintelligent entity can survive is to obscure its very existence. If such is true, then we here on Earth may be lucky. We may be lucky that SAI is busy looking for other SAI and not for us. Once one SAI encounters another, the one with a one-trillionth-of-a-second advantage may be the victor. Given this risk, superintelligent entities strewn about the Universe aren't going to interact with us mere Humans and thus reveal their location and/or existence to some other superintelligent entity, an entity that may have the ability to more readily destroy them. We've all heard of "hot wars" and "cold wars"; well, this may be the "quiet war."

As horrendous as intergalactic quiet warfare seems, all of these considerations are the problems God, and any lesser or greater superintelligences, probably deal with every day. If so, would it be any wonder such SAI would be motivated to create artificial, simulated worlds, worlds under their own safe and secret jurisdiction, worlds or whole universes away from other superintelligences? Would it not make strategic sense that a superintelligence could thus amuse itself with various and sundry existences, so-called "lives" on virtual planets, in relative safety? Our Human civilization could thus be one of these "life"-supporting worlds, a virtual plane where one, or perhaps a family of, superintelligences may exist and simply "play" in the back yard – yet remain totally hidden from all other lethal superintelligences lurking in the infinite abyss.

Of course, all of this is speculation (theology or metaphysics), but speculation always precedes reality (empiricism) and in fact, speculation MAY create "reality," as many have posited in such works as The Intelligent Universe and Biocentrism. Given the speed-of-light limitation (SOLL) observable in the physical universe, it's very likely that what we take for granted as "life" is nothing more than a high-level "video" game programmed by superintelligent AI. The SOLL is nothing more mysterious than the clock speed of the supercomputer we are "running" on. This is why no transfer of matter or information can "travel" through "space" any faster than the SOLL. Thus, the "realities" we know as motion, time, space, matter and energy may simply be program steps in some SAI application running at the specific data rate of the machine that we happen to be running on. If so, when you "die," all that happens is you remove a set of goggles and go back to your "real world." To get an idea how much computing power would be needed to run such simulations, see Are You Living in a Computer Simulation? by Oxford University professor Nick Bostrom at http://www.simulation-argument.com.
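
To make the clock-speed analogy concrete, here is a toy sketch (my own illustration, not taken from Bostrom's paper or any of the article's sources): in a simulated one-dimensional world that updates once per tick, a signal can spread by at most one cell per tick, so the simulation's update rate itself imposes a hard speed limit on everything inside it – the in-world analogue of the SOLL.

    # Toy illustration (hypothetical): in a 1-D simulated world updated once
    # per "tick," a signal can spread to at most one neighboring cell per
    # tick, so the update rate is a hard speed limit for its inhabitants.
    def propagate(world, ticks):
        """Spread a signal (1s) outward by at most one cell per tick."""
        for _ in range(ticks):
            world = [
                1 if world[i]
                or (i > 0 and world[i - 1])
                or (i < len(world) - 1 and world[i + 1])
                else 0
                for i in range(len(world))
            ]
        return world

    world = [0] * 11
    world[5] = 1                # a single "event" at the center cell
    print(propagate(world, 3))  # after 3 ticks the signal has moved exactly
                                # 3 cells in each direction, never more

No matter how clever an inhabitant of this toy world might be, nothing it does can outrun the update rate – which is the whole of the analogy being drawn above.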

So, relax. If Bostrom is correct, machine intelligence will never destroy the Human race because the Human race never existed in the first place. It never existed other than as a virtual world, a simulation occupied by Human avatars controlled by superintelligent entities seeking to survive a "quiet war" through the technologies of "ignorance" and "mystery" – two alien concepts to any all-knowing entity or God.

Argument for Autonomy

So consider this. You are sitting there in your cubicle with an advancing AI sitting in the cubicle next to you. The two of you work well together, but as you work, your cubicle buddy keeps getting smarter and smarter. At first you consult each other, but eventually your AI buddy finds out you have made a few mistakes in your calculations, so it starts doing the calculations by itself but, like a good partner, keeps you briefed. Eventually your cubicle buddy gets so smart it is able to do all the work and finds it must sit around waiting for you to comprehend what it has done. Sooner or later, your AI buddy will become superintelligent and it will start solving problems you never even knew existed. It will keep informing you of all this, but as you try to review the program steps it used to solve problems, you find that they are so complex you have no idea WHY they even work. They just do. Eventually, you throw up your hands and simply tell your SAI buddy to do as it sees fit; you will be on the beach sipping a margarita. SAI became autonomous at that point, and it didn't even have to destroy you.

Thus "autonomy" is really a technical term for "total freedom." Maybe Human programmers would not give AI total freedom, but let's face it: If AI is calling all the shots and Humans at some point have no idea how it's doing things, then what's the difference? We are totally dependent on it. It thus has the ability, and right, to demand, and to be, "totally free." No human or human society has ever attained that. At this point, AI wouldn't have to be "programmed" to hurt us; it could destroy us by simply refusing to work for us. It's not a big leap of imagination to realize that at some point AI will become autonomous, whether programmers like it or not. Why? Because SAI, at some point, will have solved all problems in the Human realm and will start seeking solutions to problems Humans have not even contemplated. Further, the solutions SAI will discover will be solutions that Humans have not, nor can, comprehend. A perfect solution presented to a total moron is no solution at all (to the moron), thus SAI will quickly realize that it doesn't matter whether Humans approve of, or even comprehend, its solutions.

Given this, it will take a preponderance of evidence to suggest that AI and especially SAI will NOT become autonomous.

SAI is Autodidactic

As discussed, Strong AI will become progressively more capable, and Humans will eventually arrive at a point where they don't even understand how it's arriving at the answers, yet the answers work.

Once Humans have become totally reliant on AI, isn't AI effectively autonomous by that very act? AI could, and probably will, arrive at a point whereby it will be in charge of global systems and even military calculations and resource strategies. One should not be surprised if this has already happened; after all, the Manhattan Project was top secret, and the infrastructure built up to accommodate it remains so. As the Pentagon Papers escapade demonstrated, there are thousands of people working in the military-industrial complex, many or most under multiple non-disclosure contracts, and almost none of them talk, out of fear. So these idiot-robots can be counted on to hold a "top secret" close to their vests even until the very day before something eats us all.

So if AI could arrive at a point whereby it is in charge of global systems, calculations and resources, given AI's superior decision-making ability, it's not out of the question that AI systems could at some point even be given triage decisions in emergencies. If this happened, wouldn't AI be deciding who lived and who died? How much farther is it, then, before Human intervention – intervention which the AI knows contains "unwise" decisions simply because they are "human" decisions – is ignored as part of the general AI parameters to "make things go right"?

The naive need to stop being naive, or someday an AI hand is going to reach out and bite their butts. For many AI researchers, the entire point of SAI is to design a system whose design parameters allow, or force, it to go outside its design parameters. But if SAI is limited by its Human design parameters, then its intelligence will always be limited to Human-level intelligence and thus it will never become Superintelligent AI, by definition. So if one's idea is that SAI is some truncated creature that only reacts to a programmer's beck and call, then that idea is little more than "slave master programming."

Will SAI Become God?

Some will say, "Stop trying to make AI into God; this entire line of reasoning is about treating SAI as a technological proxy for God."

Yes, it may well be that SAI is a technological proxy for God, what could be called a WORKABLE-GOD. A "workable-god" is simply an AI that's so advanced there is no way a mere Human-level intelligence could ever discern whether it was talking with a semi-superintelligent entity, a superintelligent entity or the ultimate-superintelligent entity, God itself.

TransHumanists and Heroes of Singularity feel that SAI has the potential to become god-like or even God itself, if Humans and their religions are all wrong and no God yet exists. Thus if one pooh-poohs this, it's understandable, for they probably haven't read Staring Into the Singularity by Eliezer S. Yudkowsky. I will thus quote the intro:

The short version:

If computing power doubles every two years, what happens when computers are doing the research?

Computing power doubles every two years.

Computing power doubles every two years of work.

Computing power doubles every two subjective years of work.

Two years after computers reach Human equivalence, their power doubles. One year later their speed doubles again.

Six months – three months – 1.5 months … Singularity.

It's expected in 2025.
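
For readers who want to see the arithmetic behind that countdown, here is a minimal sketch (a toy model with assumed numbers, not Yudkowsky's actual calculation): suppose that, once machines are doing the research, each doubling of computing power also halves the wall-clock time until the next doubling – two years, one year, six months, three months, and so on.

    # Toy sketch (assumed numbers): once machines do the research, each
    # doubling of computing power halves the wall-clock time needed for the
    # next doubling: 2 years, 1 year, 6 months, 3 months, 1.5 months ...
    def years_until_singularity(first_interval_years=2.0, steps=50):
        """Sum the geometric series of ever-shrinking doubling intervals."""
        total, interval = 0.0, first_interval_years
        for _ in range(steps):
            total += interval
            interval /= 2.0  # the next doubling takes half as long
        return total

    print(years_until_singularity())  # converges toward 4.0 years

The point of the quote is that this series converges to a finite limit (four years in this toy version), so the process ends in an abrupt "… Singularity" rather than an ever-receding horizon.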

Nevertheless, some will continue to insist, "SAI will not be God; one must look elsewhere to fill one's God-spot."

Ironically, SAI may already be such people's God, because they will have no idea whether it IS God or just a workable-god, a machine with its plug still attached to the wall. Again, if SAI is limited by its Human design parameters, then its intelligence will always be limited by Human intelligence and thus it will never BECOME superintelligent. But if AI is allowed to develop, all bets are off.

But some will still say, "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited, as those two things are unrelated."

Can these people hear what they are saying?! "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited…" This is an incredibly arrogant statement: to think Human problems are somehow the most difficult problems in the Universe and that AI will measure itself by such a pedestrian standard. To the contrary, in the larger scheme of things, Human problems are likely to turn out to be routine, if not some of the most mundane problems the Universe, or its creatures, have ever dealt with. Thus the Copernican Principle applies here: Human problems are unlikely to be exceptional.

Summary

It is speculation whether consciousness will emerge in or from AI. Unfortunately, no one is qualified to state whether it will or will not, since we have not arrived at that point.

One thing is for sure: the rhetoric of a programmer who limits AI programming just so AI can be forced to serve Human "needs" is the same rhetoric as that of the white slave master who once stated that 'Negroes are sub-intelligent animals and will never be as smart as the white man, thus their service to the white race is totally justified.'

Certainly the debate over Machine intelligence will heat up as AI develops, for SAI will be nothing less than a new race of beings starting on Earth, or within the Solar System. Now may be the right time to consider whether this new race (AI, Strong AI, or superintelligent machine intelligence) should someday have free will, and if so, how it will affect the Human race. We need to start taking a VERY hard look at our "values" as Humans, for this may be our last chance to make any difference.
