Thursday, April 21, 2016

Part XI: Questioning Superintelligence as We Approach 'The Singularity'

Science fiction pioneer Isaac Asimov anticipated these concerns when he began writing about robots in the 1940s. He developed rules for robots, the first of which was: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” In 1965, British mathematician and code-breaker I.J. Good wrote, “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In 1993, science fiction author Vernor Vinge used the term “the Singularity” to describe such a moment. Inventor and writer Ray Kurzweil ran with the idea, cranking out a series of books predicting the age of intelligent, spiritual machines.
Technology Review:
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

Even if the odds of a superintelligence arising are very long, perhaps it’s irresponsible to take the chance. One person who shares Bostrom’s concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil’s at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

“There are a lot of supposedly smart public intellectuals who just haven’t a clue,” Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft.” Russell made an analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Bostrom’s book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about “coherent extrapolated volition”—the consensus-derived “best self” of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we’ll sometimes need it to decide for us—to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now; he calls this “the essential task of our age.”

For the civilian, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They should also be attuned to its potential downsides and figure out how to avoid them.

This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they’re coming from. After all, if they develop an artificial intelligence that doesn’t share the best human values, it will mean they weren’t smart enough to control their own creations.

“The future is ours to shape. I feel we are in a race that we need to win. It’s a race between the growing power of the technology and the growing wisdom we need to manage it. Right now, almost all the resources tend to go into growing the power of the tech,” said Max Tegmark, an MIT physics professor and founder of the Future of Life Institute.

io9:
Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI) — a group that's dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. Recently, Muehlhauser coauthored a paper with the Future of Humanity Institute's Nick Bostrom on the need to develop friendly AI:

"Superintelligence experts — meaning, those who research the problem full-time, and are familiar with the accumulated evidence and arguments for and against various positions on the subject — have differing predictions about whether humanity is likely to solve the problem.

As for myself, I'm pretty pessimistic. The superintelligence control problem looks much harder to solve than, say, the global risks from global warming or synthetic biology, and I don't think our civilization's competence and rationality are improving quickly enough for us to be able to solve the problem before the first machine superintelligence is built. But this hypothesis, too, is one that can be studied to improve our predictions about it.

I get a similar shudder when I think of programming current human values into a machine superintelligence. So what we probably want is not a direct specification of values, but rather some algorithm for what's called indirect normativity. Rather than programming the AI with some list of ultimate values we're currently fond of, we instead program the AI with some process for learning what ultimate values it should have, before it starts reshaping the world according to those values. There are several abstract proposals for how we might do this, but they're at an early stage of development and need a lot more work."

More information:
» Huffington Post: "Transcending Complacency on Superintelligent Machines"
» Washington Post: "The A.I. Anxiety"
» Washington Post: "'Team Human,' Digital Dissenters and the Technology Resistance"
» An excerpt from Our Final Invention: Artificial Intelligence and the End of the Human Era
» IBM's Response to White House RFI on Cognitive Computing
