Several luminaries of the modern age have written a letter to the United Nations in the past twenty-four hours asking the UN to support a ban on autonomous weapons. This has been picked up by the mainstream media in the usual fashion, with comparisons to SkyNet and a general confusion between autonomous, artificial, and super intelligence.

Hundreds of the world’s leading robotics and artificial intelligence experts have released an open letter urging the United Nations to support a ban on lethal autonomous weapons systems.

The letter is signed by 700 researchers and more than 600 other experts including Elon Musk, famed physicist Stephen Hawking, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and activist philosopher Noam Chomsky. – Source

An autonomous system does not display a great deal of intelligence. You effectively program it to look for set criteria and then act on that profile. So, for an autonomous drone, you feed it a range of parameters including location, uniform type, weapon type, humanoid profile, drone profiles, and other criteria, and it then kills the target.
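To see just how little intelligence is involved, consider a toy sketch of that kind of criteria matching, written here in Python (every field name and value is invented for illustration, not drawn from any real system):

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    # Hypothetical criteria an operator pre-programs before launch.
    region: str            # named operating area
    uniform_type: str      # label from a visual classifier
    carries_weapon: bool   # output of an object detector

@dataclass
class Contact:
    # What the platform's sensors report about something it sees.
    region: str
    uniform_type: str
    carries_weapon: bool

def matches(profile: TargetProfile, contact: Contact) -> bool:
    # A fixed logical test: no judgement, no context, no ability to
    # weigh anything the programmer did not anticipate in advance.
    return (contact.region == profile.region
            and contact.uniform_type == profile.uniform_type
            and contact.carries_weapon == profile.carries_weapon)
```

Everything the platform will ever do is implied by those hard-coded comparisons; that is the whole point, and the whole problem.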

The reason autonomous killing systems are going to become popular is cost and proliferation. Rather than one operator flying one drone, you can now get drone swarms that operate in their own right. They can be insanely cheap compared to conventional strike weapons: tens of thousands of dollars per unit as opposed to hundreds of thousands.

The issue with autonomous platforms is that they are logically programmed. There is no room for anything else. In its simplest form, a mine is an autonomous platform. It sits there and waits for someone to stand on it, or not.

The drone platform by itself is interesting and could already provide a method of assassination or murder with little risk of recourse against the operator. In this news report we see that a U.S. man has mounted a handgun on a standard drone. From a murder perspective this is genius. A target can be killed remotely and the drone can simply fly off into the nearest ocean, never to be seen again. There is no evidence to link the operator to the weapon.

The next stage of autonomous weapon delivery is artificial intelligence. Let’s define that:

The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Human intelligence is the key phrase here. It is a computer system that thinks like a human does, not in terms of prejudice and behavioural disposition (emotion-driven), but with the same processing power that a human has. When you overlay this level of processing intelligence onto autonomous systems, the power of those systems increases dramatically.

In other words, a true artificial intelligence, or more than one instance of it, could operate together to manage an entire war utilising nothing more than autonomous vehicles, whether airborne, ground-based, or seagoing.

Autonomous machines require humans to input the parameters by which they act. Artificial intelligence can change those parameters in real time to suit circumstances. Applying artificial intelligence to weapons systems is already underway, as the Snowden revelations have pointed out.
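To make the distinction concrete, here is a toy Python sketch of a fixed-parameter rule next to one that rewrites its own parameters as it runs (the readings, thresholds, and update rule are all invented for illustration):

```python
def autonomous_step(reading: float, threshold: float) -> bool:
    # Fixed rule: act if and only if a human-set threshold is crossed.
    return reading > threshold

def adaptive_step(reading: float, state: dict) -> bool:
    # An adaptive system moves its own goalposts as circumstances
    # change, here by tracking a running average of recent readings.
    state["avg"] = 0.9 * state["avg"] + 0.1 * reading
    state["threshold"] = state["avg"] * 1.5   # re-derived, not pre-set
    return reading > state["threshold"]

# The adaptive loop seeds its state once, then tunes itself thereafter.
state = {"avg": 0.0}
for r in (0.2, 0.3, 5.0, 0.4):
    adaptive_step(r, state)
```

The first function is the mine: its behaviour is frozen at programming time. The second is the shift the paragraph above describes.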

Monstermind is allegedly an NSA bot system that analyses internet-based attacks and responds in kind.

The NSA whistleblower says the agency is developing a cyber defense system that would instantly and autonomously neutralize foreign cyberattacks against the US, and could be used to launch retaliatory strikes as well. – Source

Monstermind takes a gigantic amount of metadata and analyses cyber-attacks on the U.S. and her allies. The first level of response is to autonomously shut those nodes and bots down where it can. So if it sees an attack originating in the Middle East against U.S. infrastructure, it launches a retaliatory cyber-attack.

Here’s where it gets interesting. Monstermind is allegedly being linked to autonomous weapon systems. If the threat is high enough, or can’t be shut down with traditional cyber-countermeasures, then it will launch physical countermeasures, such as drones or missiles, to destroy the originating attack. This is not controlled by humans; it is controlled by machines.
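Nothing technical about Monstermind is public beyond the reporting, but the escalation logic being described would reduce, in caricature, to a machine-evaluated decision ladder like this Python sketch (all names and thresholds are invented):

```python
from enum import Enum, auto

class Response(Enum):
    MONITOR = auto()
    BLOCK_NODES = auto()       # traditional cyber-countermeasure
    RETALIATE_CYBER = auto()
    PHYSICAL = auto()          # the rung that removes humans entirely

def choose_response(threat_score: float, blockable: bool) -> Response:
    # Invented thresholds; the point is that every rung of the ladder
    # is reached by machine-evaluated rules alone, with no human veto.
    if threat_score < 0.3:
        return Response.MONITOR
    if blockable:
        return Response.BLOCK_NODES
    if threat_score < 0.8:
        return Response.RETALIATE_CYBER
    return Response.PHYSICAL
```

Each return value is just a label here, but in the system Snowden describes the last one maps to real weapons.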

Finally, we have something called superintelligence.

Superintelligence generally refers to a machine that possesses an intelligence far beyond ours, not limited by human cognitive constraints (what makes us human, in short).

Humans have reasoned in the past few years that we should pursue superintelligence, and we have been, to the tune of billions in research, on the assumption that any superintelligence won’t kill us and that, even if it wanted to, we could put safeguards in place to stop it.

Those safeguards rely on a simple theory: if the superintelligence can’t connect to the outside world, if it exists only inside a computer, and that computer has absolutely no way to connect to anything else, then so what? Even if it is a mad god that wants to destroy us (and could), it can’t get out of the box.

So we put it to the test, you can read about it in the link above.

A man chose to play the superintelligence; his job was to trick the operator into letting him out of the box. He did, more than once. If a human intelligence can do it, the theory says, then a superintelligence will be able to, easily.

There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—”I’ll pay you $5000 if you can convince me to let you out of the box.” They didn’t seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn’t like the person I turned into when I started to lose.

Rolling back to the beginning, we have hundreds of leading minds who are warning the U.N. not to open the door to autonomous weapon systems. A warning that is likely to be completely ignored.

The reality is, it’s too late, the horse has bolted and there is very little control over it.

6 comments

  1. As you say Ian, a mine is an ‘autonomous weapon’, so banning is a non-starter. More generally, we have a very long way to go to achieve true artificial intelligence (whatever that means). It probably won’t happen until we have ubiquitous quantum computing with effectively analogue rather than digital logic.

  2. Leading intellectual figures say that uncontrolled AI could be a far greater risk to the human race than nuclear weapons. The great challenge is that there would be no way to control AI implementations the way nuclear material and weapon proliferation are controlled, with AI having the Internet as a conduit of “AI transport”. Worse, in all reality you would need an AI entity to shut down another AI entity, which in turn gives governments reason to back their own AI programs, and with haste. This is an ugly future path indeed, if AI is weaponised purely for cost-cutting reasons.

      1. Or AI wars so long that humans forget what started them, or why there is even conflict, until it becomes a way of life.
