Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.

Ray Kurzweil[1]

[This is Phil, blog philosopher and occasional commenter on technology, science fiction and the scientific method. Fred called the other day, much disturbed; he’s been looking at the research programs of our country’s much esteemed Defense Advanced Research Projects Agency [DARPA], and he’s found worrisome things and trends. Of course there’s always reason to worry when the human animal tries something new; that’s evolution, I guess. First we had antibiotics, now we have antibiotic-resistant bacteria. Then we developed nuclear power, and early on found a way to make it go bang! But despite our valiant efforts – remember the “Atoms for Peace” program?[2] – its peaceful uses are few, expensive and often dangerous. If you don’t believe me, reflect for a moment on Chernobyl,[3] and the more recent experience of the Japanese at Fukushima.[4] Does anyone want to do that again?

Anyway, recently Fred sat me down for several hours and explained his views on robots and their danger to us. I was impressed, and asked him to pen a blog for us, but he’s very busy right now; so I agreed to write it for him. I don’t necessarily agree with everything he says but, as usual, he’s interesting.]

Monsters

Antibiotics and nuclear power are old news. Fred’s more concerned about the new stuff, that is, about the startling progress we’ve made in building machines that mimic, or duplicate, human mental processes. Of course, people have always worried that one day mankind might create something that could destroy us. Mary Shelley’s Frankenstein is an example of that. Shelley’s monster was a human-like thing made up of sewn-together body parts robbed from graves. The Terminator movie franchise is a bit more modern but essentially the same, except the monsters aren’t biological; they’re an outgrowth of modern computer technology. If I recall correctly, the terminators are part of a computerized defense system that goes rogue.

So you can see why Fred might be concerned. He’s read quite a bit of sci-fi, as have I, and the rebellious robot is a favorite meme of the genre.[5] And Fred’s not the only one to be concerned. Stephen Hawking, for one, has warned that developing artificial intelligence might be very bad for humanity.[6] And reportedly Bill Gates, of Microsoft fame, has said pretty much the same thing,[7] and so has Elon Musk, the guy behind Tesla cars.[8] These people know a thing or two about intelligence, human or otherwise.

The Three Laws

Fred says, truthfully, that robots in literature are not always hostile. His favorite examples of non-monster robots are the ones portrayed by Isaac Asimov in a series of novels he began in the 1950s. Asimov posited a future in which the vast majority of humans were trapped on a badly overcrowded planet Earth, while a minority had escaped to 50 nearby star systems. The off-planet settlers had technology superior to that of the home planet, so much so that it allowed them to develop “humaniform” robots, i.e., ones that substantially mimicked human beings.

Asimov’s first three “robot” novels[9] dealt with the efforts of an Earth detective and an off-planet robot to solve a series of murders. They’re entertaining but, more to the point, they deal at length with the interaction of a human with a robot, and suggest a set of controls humans might impose on their mechanical friends. These are, in order of priority, that a robot (i) shouldn’t harm humans, (ii) should obey the orders of humans, and (iii) should protect its own existence.[10] They’re also known as Asimov’s Three Laws of Robotics.

Asimov’s first robot novels were written 50 or more years ago. Neither Fred nor I know how he proposed to educate a robotic “brain” to accept such limitations, but today Fred sees the “three laws” as software that should be added to the programming of autonomous machines. The problem, he says, is that DARPA is doing a lot of good work in creating devices, etc., that mimic human functions, but apparently nothing to add a conscience to them.

The DARPA Programs

Some of these projects are astonishing. There’s one, for example, aimed at developing communication devices [aka radios, etc.] that will operate at any time or place, and under any conditions. DARPA calls this Adaptive RF Technology [ART],[11] and pretty much implies that the adjustments will be made automatically when necessary, i.e. without human intervention. That would be useful, I suppose, if one is operating a drone from a distance. The drone is on the spot, knows the local conditions, and would be much more useful if it had a “cognitive” radio that could solve communications problems, rather than rely on a remote human pilot to do the job.
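Fred and I can only guess at the engineering details, but the core idea is easy to sketch. Here’s a toy version in Python – my own illustration, with made-up frequencies and link-quality numbers, not anything from DARPA’s actual ART program – of the adapt-on-the-fly loop a “cognitive” radio might run:

    import random

    # Candidate configurations the radio could switch between.
    # Frequencies and modulations are purely illustrative.
    CONFIGS = [
        {"freq_mhz": 225.0, "modulation": "BPSK"},
        {"freq_mhz": 450.0, "modulation": "QPSK"},
        {"freq_mhz": 1350.0, "modulation": "16QAM"},
    ]

    def measure_link_quality(config):
        """Stand-in for a real SNR or packet-loss measurement."""
        return random.uniform(0.0, 1.0)  # 0 = dead link, 1 = perfect

    def pick_best_config():
        """Probe each candidate and keep the one with the best link."""
        return max(CONFIGS, key=measure_link_quality)

    def cognitive_radio_loop(steps=10, threshold=0.4):
        current = pick_best_config()
        for _ in range(steps):
            if measure_link_quality(current) < threshold:
                # Link degraded: reconfigure without asking a human.
                current = pick_best_config()
        return current

    print("settled on:", cognitive_radio_loop())

The point isn’t the code, which is trivial; it’s that the decision to retune gets made on the spot, by the machine, with no operator in the loop.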

Then there are the machines, etc., that will do the same things humans do, only better and faster. In the Cyber Grand Challenge, for example, DARPA sponsored “a major competition to develop advanced autonomous systems that can detect, evaluate, and patch software vulnerabilities before adversaries have a chance to exploit them.” Why should machines do this sort of thing? Well, because humans just aren’t fast enough.[12] But the humans at this stage are very much involved in designing their own successors.
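To see why speed matters, consider the simplest ingredient of such systems: fuzzing, i.e., hammering a program with random inputs and watching for crashes. Here’s a toy sketch – my illustration, not anything from the actual Challenge entrants – in which a machine tries a hundred thousand inputs in less time than it takes a human to read the source code:

    import random
    import string

    def target(data: str):
        """A deliberately buggy 'program under test' -- purely illustrative."""
        if len(data) > 8 and data.startswith("A"):
            raise RuntimeError("simulated memory corruption")

    def fuzz(trials=100_000):
        """Hammer the target with random inputs; record any that crash it."""
        crashes = []
        for _ in range(trials):
            data = "".join(random.choices(string.ascii_uppercase,
                                          k=random.randint(0, 12)))
            try:
                target(data)
            except Exception:
                crashes.append(data)
        return crashes

    found = fuzz()
    print(f"{len(found)} crashing inputs, e.g. {found[:3]}")

A real entrant does far more – it also has to diagnose the crash and synthesize a patch – but the brute arithmetic of trying inputs by the hundred thousand is exactly where humans can’t compete.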

Or, on that same theme, consider model-making. That’s the kind of thing that economists and other people do to make sense of voluminous data and, incidentally, also to make a living. The people on talk radio, of course, use very simple models, usually something like “if supply goes up, then prices will go down,” completely ignoring the many other things that might affect price. DARPA says we humans are awash in data, but “what’s missing are empirical models of complex processes that influence the behavior and impact of those data elements.” At the end of the day, it’s really difficult to create good models that reliably predict things that actually happen. So how do we solve that problem? DARPA has an answer – its Data Driven Discovery of Models program – which will free us from all that drudgery.[13] Using artificial intelligence, it will analyze data and develop models for us.
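In miniature, “data-driven discovery of models” amounts to automated model selection: propose candidate model forms, fit each to the data, and keep whichever best predicts data it hasn’t seen. Here’s a toy sketch with synthetic supply-and-price data – my illustration of the general idea, not DARPA’s actual machinery:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "observed" data: price falls with supply, but noisily.
    supply = rng.uniform(1, 10, size=200)
    price = 50 / supply + rng.normal(0, 1, size=200)

    # Candidate model forms a discovery system might search over.
    candidates = {
        "linear":     lambda s: np.column_stack([np.ones_like(s), s]),
        "quadratic":  lambda s: np.column_stack([np.ones_like(s), s, s**2]),
        "reciprocal": lambda s: np.column_stack([np.ones_like(s), 1.0 / s]),
    }

    train, test = np.arange(150), np.arange(150, 200)

    def held_out_error(features):
        """Fit on the training rows, score on the held-out rows."""
        X = features(supply)
        coef, *_ = np.linalg.lstsq(X[train], price[train], rcond=None)
        return np.mean((X[test] @ coef - price[test]) ** 2)

    best = min(candidates, key=lambda name: held_out_error(candidates[name]))
    print("best model form:", best)  # the reciprocal form should win here

Note what the talk-radio model misses: the search isn’t wedded to a straight line. Given the candidates, the machine discovers that price varies with the reciprocal of supply, because that form predicts the held-out data best.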

Then there’s the astonishing work being done in creating artificial limbs for the injured. Versions currently in production are battery powered, of “near natural size and weight,” and allow for “extremely dexterous” arm and hand movement.[14] Also, we’re learning how to convey touch and pressure through these things[15] and are experimenting to develop one that can be controlled by the user’s brain.[16]

This information comes directly from DARPA, mostly in the form of news releases. One of the odder ones deals with a separate program to develop self-healing construction materials. DARPA is now studying the use of living materials – ones that can be grown and regenerate – for these purposes. “DARPA is launching the Engineered Living Materials (ELM) program with a goal of creating a new class of materials that combines the structural properties of traditional building materials with attributes of living systems ….”[17] Could this line of research also lead us to materials that might be incorporated into machines, say cars, drones and robots?

Protecting Us from Our Creations

I suppose you can classify that last paragraph as speculation, but the greater point is not. Our scientists experiment with artificial intelligence, and are making great progress to boot. They’re developing autonomous machines – ones that can operate on their own[18] – and giving them mental capabilities that in some cases exceed those of ordinary humans. Others are working hard to fabricate artificial limbs for the wounded, but the same technology might be adaptable to other platforms, i.e. to mobile robots. Add construction materials to the mix, and it’s not impossible to believe that someday you and I might be dealing with mobile, bipedal, autonomous robots. Ask Arnold Schwarzenegger; he’s already been there in the Terminator movies.

If such things are possible, why aren’t scientists also working on ways to protect us from the occasional wayward or hostile robot? Why isn’t somebody experimenting with, for example, a software version of Asimov’s Three Laws? I can think of one reason: a lot of the current development work is being done by the military, which, of course, is primarily interested in technologies that will neutralize an enemy. Pacifist robots – ones that won’t harm humans – probably aren’t in any military organization’s R&D[19] budget.

So could private industry do this kind of work? That is, develop a software conscience for intelligent machines, so that they don’t turn on their human creators? Certainly that could be a selling point for some products. Someone who buys an intelligent, self-driving car, for example, probably would like to know that the vehicle won’t go homicidal. Asimov’s Three Laws are tailor-made for that situation.
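For what it’s worth, the skeleton of such a software conscience isn’t hard to sketch: a priority-ordered filter that vets every proposed action before the machine executes it. In the toy Python version below, the Action fields and the permitted() check are my own inventions for illustration; deciding whether an action really “harms a human” is, of course, the genuinely hard part:

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool       # would carrying this out injure a person?
        ordered_by_human: bool  # did a human order this action?
        endangers_self: bool    # does it put the machine itself at risk?

    def permitted(action: Action) -> bool:
        """Vet a proposed action against the Three Laws, in priority order."""
        # First Law: never harm a human -- this outranks everything else.
        if action.harms_human:
            return False
        # Second Law: a (harmless) human order is honored.
        if action.ordered_by_human:
            return True
        # Third Law: otherwise, don't needlessly endanger yourself.
        return not action.endangers_self

    # A self-driving car ordered to swerve into a pedestrian must refuse:
    swerve = Action("swerve into pedestrian", harms_human=True,
                    ordered_by_human=True, endangers_self=False)
    assert not permitted(swerve)  # the First Law outranks the order

Everything interesting hides inside those three boolean flags, which is presumably why nobody has shipped this yet. But the priority ordering itself – Asimov’s real contribution – costs about ten lines.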

Nevertheless, Fred wouldn’t rely on private industry to do this kind of work unless somebody paid them to do it. Modern companies don’t have much of a budget for pure research; their focus is too short-term, on the next quarterly earnings report rather than the long view. He would rather enlist the aid of Bill Gates, Warren Buffett or other proven philanthropists to fund and oversee the work that needs to be done. Fred says he would be happy to set up and manage such a program, if somebody wants to support it.

Fred’s ideas often are weird, but he has a winner here. In my humble opinion.

[1] Ray Kurzweil is an inventor, computer scientist, author and general commentator on things cybernetic. Wikipedia has an entry on him at https://en.wikipedia.org/wiki/Ray_Kurzweil. The quote is from Brainy Quote, at https://www.brainyquote.com/quotes/quotes/r/raykurzwei591137.html?src=t_artificial_intelligence

[2] Probably not. Check out Wikipedia at https://en.wikipedia.org/wiki/Atoms_for_Peace for a refresher.

[3] Want to know more? Check out the Wikipedia article on the Chernobyl disaster, at https://en.wikipedia.org/wiki/Chernobyl_disaster

[4] Wikipedia does a fairly good job on this as well. Its article is at https://en.wikipedia.org/wiki/Fukushima_Daiichi_nuclear_disaster

[5] Oh, look: fancy words! A “genre” is a specific category of music, film or writing. Today science fiction is recognized as a “genre.” See, e.g., https://www.vocabulary.com/dictionary/genre. A “meme” is basically a recurring idea or story-line.

[6] See Daily Mail, Zolfagharifard, Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind (2 May 2014), available at http://www.dailymail.co.uk/sciencetech/article-2618434/Artificial-intelligence-worst-thing-happen-humanity-Stephen-Hawking-warns-rise-robots-disastrous-mankind.html

[7] See, e.g., Sunday Express, Dassanayake, Bill Gates joins Stephen Hawking in warning Artificial Intelligence IS a threat to mankind (Jan. 29, 2015), available at http://www.express.co.uk/news/world/555092/Bill-Gates-Stephen-Hawking-Artificial-Intelligence-AI-threat-mankind

[8] Id.

[9] The first book in the series is called I, Robot. Actually it’s a collection of short stories, not a novel. The next three are novels. If you want to know more, you can always read the books. Otherwise check out Wikipedia at https://en.wikipedia.org/wiki/Robot_series_(Asimov). It’s more or less correct.

[10] Wikipedia also discusses the Three Laws. See https://en.wikipedia.org/wiki/Three_Laws_of_Robotics. The “laws” are: (i) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (ii) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (iii) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

[11] DARPA, Rondeau, Adaptive RF Technology (ART), available at http://www.darpa.mil/program/adaptive-rf-technologies. “ART-enabled ‘cognitive’ radios would be able to reconfigure themselves to operate in any frequency band with any modulation and for multiple access specifications under a range of environmental and operating conditions.”

[12] DARPA, “Mayhem” Declared Preliminary Winner of Historic Cyber Grand Challenge (8/4/2016), available at http://www.darpa.mil/news-events/2016-08-04

[13] See DARPA, DARPA Goes “Meta” with Machine Learning for Machine Learning (6/17/2016), available at http://www.darpa.mil/news-events/2016-06-17

[14] DARPA, DARPA Provides Mobius Bionics LUKE Arms to Walter Reed (12/22/2016), available at http://www.darpa.mil/news-events/2016-12-22

[15] DARPA, Neuroscience of Touch Supports Improved Robotic and Prosthetic Interfaces (10/26/2016), available at http://www.darpa.mil/news-events/2016-10-26

[16] DARPA, DARPA Helps Paralyzed Man Feel Again Using a Brain-Controlled Robotic Arm (10/13/2016), available at http://www.darpa.mil/news-events/2016-10-13

[17] DARPA, Living Structural Materials Could Open New Horizons for Engineers and Architects (8/5/2016), available at http://www.darpa.mil/news-events/2016-08-05

[18] Actually we did a blog on that a while ago. See the blog of 2016/09/07, Autonomous Weapons, available at https://opsrus.wordpress.com/2016/09/07/1241/

[19] That’s Research & Development, sometimes also known as RDT&E [Research, Development, Test and Evaluation].
