After singularity scientist Will Caster (Johnny Depp) is shot by an anti-technology extremist, Caster's wife, Evelyn, manages to upload her dying husband's mind to the core of an existing artificially intelligent machine. There, the combined intelligence grows exponentially to the point where it learns to manipulate the world's power supplies, its telecommunications, the FBI and, of course, his wife.
Evelyn -- persuaded by the machine, or out of love for her husband, or both -- helps (we'll say) him manipulate matter by organizing the construction of a technology bunker in the desert. She succeeds, and he thereupon builds nanotech devices that enable him to manipulate matter without further human intervention. Things get (spoiler alert) heady, and the ending leaves you feeling as though you've just watched Romeo & Juliet (without the feeling, the intellect, or the humanity).
This modern sci-fi techno-thriller, with roots dating back to Mary Shelley's Frankenstein, has been (a) harshly condemned (by Salon.com) as mere "techno-idiocy" with its "moronic stew" of "bad science meets bad sociology meets bad theology," (b) awkwardly panned (by the New York Times) as "implausible (maybe not)" while being "predictable and ridiculous," and (c) hesitantly approved (by The Guardian) as an "ambitious," if flawed, exploration of "grand ideas" about the "boundary between man and machine."
Verdict: All of the above.
To be fair, any film that asks a mature question such as "What makes us human, and why?" is entitled to one automatic Thumbs Up. Whether it earns the other depends upon whether it presents an answer, however lame, in an intelligent, if not convincing, way. Unfortunately, this is where the film runs short of neurons.
The "implausible": Uploading a human mind to a machine still seems more like the wishful thinking of the singularity set than a real engineering problem. The audience does not buy into this technology one bit. Nor do I, and I'll use this blog to explain why at a later time.
But here's where the "maybe not" comes in: Can we use what we know about human cognition to construct the equivalent of the human mind or, at least, something that closely resembles one?
The answer is (if you replace the uploading with sequential probability theory): yes, we probably can build something that resembles or outwardly behaves like human cognition. (Note the stress on "resembles" and "outwardly.") Indeed, we have already taken a few small steps in that direction: first with the chess-playing Deep Blue, followed by the Jeopardy!-winning Watson, and more recently with Google's self-driving cars.
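To make "sequential probability theory" less hand-wavy, here is a toy sketch (my own illustration, not anything from the film) of its simplest form: predicting the next symbol in a stream from counts of past symbols, using Laplace's rule of succession. The function name and example data are mine.

```python
def laplace_predict(history, symbol, alphabet_size=2):
    """Probability that the next symbol equals `symbol`, given the
    observed history, using add-one (Laplace) smoothing:
    (count of symbol + 1) / (length of history + alphabet size)."""
    count = history.count(symbol)
    return (count + 1) / (len(history) + alphabet_size)

# With no history, the predictor is maximally uncertain: 1/2.
# After seeing mostly 1s, it leans toward predicting another 1:
p = laplace_predict([1, 1, 1, 0, 1], 1)  # (4 + 1) / (5 + 2) = 5/7
```

Nothing here "thinks," of course; the point is only that a machine can mimic one thread of cognition -- expectation from experience -- with bookkeeping this mundane.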
At the end of the day, Ray Kurzweil is probably right on this score: it is highly plausible that during the next 20 years or so we will build a machine that exhibits what was once called Strong AI and is now referred to as artificial general intelligence (AGI). While the ultimate nature of AGI (e.g., whether it could ever reach human-level cognition) remains controversial, most would agree -- based on work pioneered by Peter Norvig and Marcus Hutter -- that our potential to build an AI architecture with remarkable cognitive abilities is quite real.
Could such a machine, as in the movies, use the net to manipulate our power supplies and means of communication? Yup. Could it manipulate matter using nanotechnology? Yup. Would it use the resources at its command the way the machine version of Will Caster did? Doh!
Here, at the very end, the film's implausibility is compounded. Whatever credibility the machine version of Will Caster had retained, it loses to an unfortunate anthropomorphizing. If uploading Will Caster's mind seemed implausible, asking the audience to believe that his beliefs, values, and ethics were uploaded along with him never had a chance.