Bob Kohn

Daniel Dewey on Intelligence Explosion


Daniel Dewey, a research fellow on Machine Superintelligence and the Future of AI at Oxford University's Future of Humanity Institute, has made available the slides for his recent talk entitled "What to do about intelligence explosion?" (For background, you might wish to view Mr. Dewey's 15-minute 2013 TED Talk, in which he provides an overview of the intelligence explosion problem.)

Dewey stands with Nick Bostrom (Oxford), Eliezer Yudkowsky (MIRI), and Stephen Omohundro (UC Berkeley), among others, as one of the foremost researchers in the field of what has been called "Friendly AI."

Dewey's recent slides provide a glimpse of his latest thinking on AI risks. Though it is hard to discern the precise scope of his suggestions without the talk itself, here are a few preliminary comments. They are not meant as criticism; it is understood that Mr. Dewey is proposing an outline of an appropriate conversation rather than specific solutions:

On slides 8 through 11, Dewey discusses an "Option A," entitled Future Solutions.

Slide 8: Safety Engineering, the objective being to reduce catastrophic risks to acceptable levels.

Dewey appears here to suggest a term of art with which builders of AI should be familiar: safety engineering. This is something that builders of bridges, houses, and potentially dangerous tools (e.g., chain saws) know all about. So should AI engineers.

Slide 9: Regulation of (some kinds of) AI through centralization, control or otherwise.

It is not clear whether Dewey is proposing outright government regulation, but in either case it would seem that an international treaty, with credible enforcement mechanisms, would ultimately be necessary. Clearly, an AI arms race is already underway. The key players would have to include the U.S., the EU, China, Japan, India, and other technologically advanced jurisdictions. The slide provides no clear outline of what such regulations would entail, even if they were implemented only locally.

Slide 10: Radical solutions: Extreme regulation (surveillance) or Controlled, humane-valued explosion.

It is not clear from the slides what he means by a "humane-valued" explosion. Assuming we can discover what that is, we would still need a means of ensuring that everyone uses it. Perhaps he should have used the conjunctive "and" in place of the disjunctive "or."

Slide 11: ? (Yes, the title of the slide is actually a question mark).

In this slide, Dewey suggests we lack sufficient information about intelligence explosion, future AI, and future society. I suppose it is useful to acknowledge what we don't know, what we do know, and what we can surmise, and then proceed from there.

Option B: Act incrementally to avoid accidental misuse.

This appears to support the safe-AI scaffolding strategy proposed by Steve Omohundro (about 45:00 into his 2012 talk at the Winter Intelligence conference at Oxford). Under this strategy, we would use some form of safety infrastructure to ensure that the intelligence we are developing is safe at each stage of development before proceeding to the next. Omohundro does suggest enforcement by a limited form of surveillance.

Dewey uses climate science as an analogy and draws a number of conclusions: (i) non-solution-proposing work can be valuable, (ii) part-time academic work can be critical, (iii) simplistic models have long-term value, and (iv) the process may take tens to hundreds of years.

His first three conclusions make a lot of sense. Non-solution-proposing work seems to be exactly what Dewey is doing. As to simplistic models, I can't think of any model more simplistic, yet more durable, than Asimov's Three Laws of Robotics (though they are not mentioned on the slides). The work of the sci-fi writers certainly adds some value to the discussion, and, since we don't appear to fully understand the nature of the intelligences we're dealing with, it should be obvious that some discussion is better than no discussion. But let's hope we can progress to some detailed solutions before it's too late.

At the end of the day, Dewey suggests we act incrementally by improving our "scientific knowledge" of the problem so that those working on building AI will be well-informed of the risks, willing to coordinate, and in possession of the technical knowledge necessary to avoid those risks. His use of the term "scientific" here is a bit constraining, but I plan to discuss that point in a forthcoming post.

In any event, Dewey's evangelism on Friendly AI is a nice complement to his serious papers on the subject. He's a man worth following.
