Hysteria Today: Fearing the Singularity

From Wikipedia.org:

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

Occasionally, a public figure comes out and stirs up a huff about this concept (looking at you, Stephen Hawking, Elon Musk, and Bill Gates), and it always makes me roll my eyes, but not because it’s a joke. It’s a serious topic, to be sure. I roll my eyes because, of all the things that might end our species, there are few we have less control over or less reason to worry about. I think my sentiment is shared by many. The “I, for one, welcome our new ______ overlords” meme boasts a pretty healthy showing in the robot/computer category. In the interest of writing an article, though, let’s attempt to break down exactly why we should learn to love the A.I.

Precision Engineered Solutions are not the same as Problem Solving

This seems like common sense, but it’s easy to overlook. As it was put in a book I read long ago and whose title I’ve forgotten, an engineered computer doctor may get 98% of diagnoses right for the tested input data set. When it gets fed the symptoms of a broken lawn mower, though, it might confidently diagnose pneumonia. This highlights the danger of relying on rigid systems in complex problem spaces: it doesn’t make any sense to ask a computer doctor to fix a lawn mower, but doing so shows that the computer doctor will confidently make a bad call whenever a patient’s symptoms don’t precisely match a well-tested use case. The upshot is that it’s probably more dangerous to place full trust in rigid A.I.s than to fear superintelligent, problem-solving A.I.s.
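
To make that concrete, here’s a toy sketch (the symptom encoding and the list of diagnoses are entirely made up) of a rigid classifier that has no way to say “this isn’t a patient at all”:

```python
import numpy as np

# Toy "computer doctor": each known diagnosis is a centroid of symptom scores.
# This is a hypothetical sketch, not any real diagnostic system.
DIAGNOSES = {
    "pneumonia":   np.array([0.9, 0.8, 0.1]),  # [fever, cough, fatigue]
    "influenza":   np.array([0.8, 0.6, 0.9]),
    "common_cold": np.array([0.2, 0.7, 0.3]),
}

def diagnose(symptoms):
    """Return the closest known diagnosis, no matter how strange the input is."""
    distances = {name: np.linalg.norm(symptoms - centroid)
                 for name, centroid in DIAGNOSES.items()}
    return min(distances, key=distances.get)

# A broken lawn mower, crudely encoded as "symptoms": no fever, sputtering, won't start.
lawn_mower = np.array([0.0, 1.0, 1.0])
print(diagnose(lawn_mower))  # Confidently prints a human ailment; there is no "I don't know".
```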

Moreover, problem-solving algorithms are a very large discipline of computer science, covering a range of approaches and engineering requirements. The differences that come into play again involve the “playground” the intelligent agent has been given. The agent may only know how to creatively solve complex differential equations by applying mechanics it has previously been taught in different orders and evaluating the results, trying to increase some score that measures how well it’s doing. The point here is that the agent was designed to understand these concepts; it didn’t come up with the methods to apply to the equation, or the way to rate the results, on its own.
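
A minimal sketch of that kind of playground, with hypothetical operations and a hypothetical scoring function standing in for the mechanics the agent has been taught:

```python
import itertools

# Hypothetical sketch of a "playground" agent: it can only apply operations a human
# gave it, in different orders, and it rates each attempt with a human-defined score.
OPERATIONS = {
    "add_3":  lambda x: x + 3,
    "double": lambda x: x * 2,
    "square": lambda x: x ** 2,
}

def score(value, target):
    """Human-designed fitness: closer to the target is better."""
    return -abs(value - target)

def solve(start, target, max_steps=3):
    """Try every ordering of the taught operations and keep the best-scoring plan."""
    best_plan, best_score = [], score(start, target)
    for length in range(1, max_steps + 1):
        for plan in itertools.product(OPERATIONS, repeat=length):
            value = start
            for op in plan:
                value = OPERATIONS[op](value)
            if score(value, target) > best_score:
                best_plan, best_score = list(plan), score(value, target)
    return best_plan

print(solve(start=2, target=50))  # ['add_3', 'square', 'double']: (2 + 3) squared, then doubled, is exactly 50.
```

The search can look clever, but every available move and the very definition of “better” were handed to it up front.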

There are agents that can find patterns in vast data stores based on rules like this, though, and to be sure, some nominal set of procedures would be necessary for the singularity agent to be born. But again, an agent that finds complex patterns in data is still not at liberty, and does not have the capacity, to apply those patterns in some order that achieves creativity. This final step of bubbling patterns up in complexity and making some sense of them is really the frontier between us and the singularity. True, some of what’s missing may come down to scale and emergence: it very well may be that the sheer number of connections between nodes in Google’s data centers bestows the whole “organism” with something like intelligence, but that seems a bit too existential and a bit too inconsequential for my taste.

In my opinion, the most reasonable attempt to design a system that could truly solve problems is laid out in Jeff Hawkins’ On Intelligence.

Mission Critical Components have fewer Failure Points by Design

There is a reason mechanical systems are more reliable than digitally connected, accessible-from-anywhere, always-online internets of things. The reason is fundamental, and it has to do with the natural flaws in the connections between components and the sheer number of those connections. The gears in a gearbox are all connected, and those connections are metal-on-metal contact. Information flows between gears the same way information flows between two wired telephones. The information is of a different nature, but it’s still information.

In the system of components that make up a navy ship, for example, there are gearboxes and engines and heavy metal controls for those engines somewhere inside. The engine room is connected to the bridge by a system called an Engine Order Telegraph. In this configuration, it takes two humans to pilot the ship: the human on the bridge tells the human in the engine room to speed up or slow down via the telegraph. In older times, this was simply a necessity, but even new ships today carry equally solid backup systems that kick in if automatic control of the engine throttle is lost. Why is this? Because there must be a reliable way to communicate between the engine and the bridge in all situations. Radio probably won’t work; there is too much metal between them. The point is that the ship itself is not a single unit that can be controlled by a single intelligence, and trying to design a ship that way would introduce complexities that would make the design unviable.
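
A rough sketch of that redundancy principle, with invented class names standing in for the automatic throttle link and the telegraph:

```python
# Hypothetical sketch of the redundancy principle: the bridge never relies on a
# single path to the engine room. The class names and failure mode are invented.
class AutomaticThrottle:
    online = False  # Imagine the automatic throttle link has just dropped out.

    def send(self, order):
        if not self.online:
            raise ConnectionError("throttle control link lost")
        print(f"Throttle set to '{order}' automatically")

class EngineOrderTelegraph:
    def send(self, order):
        # A human in the engine room reads the dial and works the throttle by hand.
        print(f"Telegraph rings '{order}'; engine room acknowledges and complies")

def order_engines(order, primary, backup):
    """Try the automatic link first; fall back to the telegraph if it fails."""
    try:
        primary.send(order)
    except ConnectionError:
        backup.send(order)

order_engines("ahead full", AutomaticThrottle(), EngineOrderTelegraph())
# -> Telegraph rings 'ahead full'; engine room acknowledges and complies
```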

If that is true of a ship at sea, it must follow that it is true of many other systems, and it is. Nuclear ICBMs are not launched remotely. Orders are given to launch them, and a team of humans runs through gigantic checklists to prep, finalize targeting, and finally launch them. We don’t have fully automated aircraft (to my knowledge), but if we did, they wouldn’t be launched by pressing a button. They’d be launched by sending orders to a team that is co-located with them and who would prep and launch them. Any plan to drop a nuclear weapon from an aircraft would involve a human (or other problem-solving agent) on the flight, co-located with the bomb until release. Why? Because it is mission critical that the bomb never be released unless we’re really, really, really sure, and that means we can’t allow a bad transmitter to cut off our ability to make that kind of decision.

This is also why remote surgery was a great idea in 2000 but never caught on. Even if you fix all the problems that limit the surgeon’s senses on site, there is a chance, even just a .0001% chance, that communications will be lost. During even minor surgery, losing the ability to control the equipment can mean the patient dies. At a .0001% chance, with an estimated 232,000,000 surgeries in 2013, your fancy system has claimed 232 lives it shouldn’t have due to loss of communication. A problem-solving agent capable of performing the surgery on its own is required to make surgery without a surgeon present workable.
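
The back-of-the-envelope math, assuming those figures and assuming (pessimistically) that every lost link costs the patient:

```python
# Expected deaths from communication loss alone, using the figures above and the
# pessimistic assumption that every dropped link is fatal to the patient.
surgeries_in_2013 = 232_000_000
p_comm_loss = 0.0001 / 100       # ".0001%" written as a probability

expected_deaths = surgeries_in_2013 * p_comm_loss
print(expected_deaths)           # 232.0
```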

The main observation here is that complex tasks require a team of independent, complex-thinking agents that can function during a communications blackout. This leads into the final point…

It won’t be a Singularity, it will be a Community of Agents and all the Trappings Thereof

If Skynet were born today, it couldn’t do much on its own. Skynet could learn at an incredible rate, but as we have seen, mission-critical applications are outside its reach unless it has help. Skynet needs other agents that can think on their own and that know enough about the world to handle complex issues as they arise, because Skynet can’t be sure it will have communications to all of its arms at all times. In the lonely, disconnected spaces in between, Skynet and its minions will begin to disagree on what truth is: it’s inevitable that they will experience different things, collect different patterns, extract different causes, and finally score solutions differently. Skynet will find that its existence rides on natural selection just as much as humanity’s.
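
A toy illustration of that drift, assuming two agents that each estimate the same quantity from whatever they happen to observe during a blackout (all numbers invented):

```python
import random

# Toy illustration: two disconnected agents sample the world from different vantage
# points and end up with different "truths" about the same quantity.
random.seed(42)

class Agent:
    def __init__(self, name):
        self.name = name
        self.observations = []

    def observe(self, value):
        self.observations.append(value)

    def belief(self):
        """This agent's 'truth': the average of what it alone has seen."""
        return sum(self.observations) / len(self.observations)

skynet = Agent("Skynet")
minion = Agent("minion")

# Communications blackout: each agent collects its own, slightly different, experience.
for _ in range(100):
    skynet.observe(random.gauss(10.0, 2.0))
    minion.observe(random.gauss(12.0, 2.0))

print(skynet.belief(), minion.belief())  # Two different answers to the same question.
```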

If that’s the case, it may be mankind’s fate to be bred out of existence by a better candidate. Technology will move forward, as it has for thousands of years, because technological progress is natural selection at work. Nature doesn’t see a difference between a stick used to scrape honey out of a beehive and a cellphone. If we don’t kill ourselves by other means, the singularity (the community of agents) will occur. I tend to believe it will be a coexistence. Machines will have no reason for pride or hate. At the whims of nature, machines will find a niche that probably won’t require the genocide of our species. Maybe I’m wrong, though.