Blog to Explore the Legal & Ethical Aspects of AI
Deep-seated fears of technology have long surfaced in the imaginative realm of sci-fi writers: from Asimov's I, Robot to Star Trek's "The Ultimate Computer," The Terminator's Skynet, The Matrix's simulated reality, Battlestar Galactica's Cylons, RoboCop's cyborgs, and Transcendence's uploaded consciousness, to name just a few.
But as Stephen Hawking once remarked, "Today's science fiction is often tomorrow's scientific fact."
Many believe it is inevitable that we will, by mid-century, develop machines endowed with a form of self-improving artificial general intelligence. Once AGI is achieved, it may be only a matter of weeks before it explodes into a kind of superintelligence, an autonomous machina sapiens, which could not only manipulate us socially but also control matter down to the nanotech level.
Such machines, to the extent their objectives are inconsistent with human values, have the potential to become indiscriminately destructive, just like in the movies. But unlike in the happy denouement of the films and stories, humanity, in real life, might just lose in the end.
This blog will explore the nature of the intelligence powering these machines, consider the risks and rewards of building them, and ask how, if necessary, we may endow them with a system of laws or values, an "artificial morality," that will enable us to deploy and use them safely. In short, we hope these pages will help address, among other things, what has become known as the Friendly AI problem.
While that problem is being tackled by academics and entrepreneurs using the tools of mathematics and engineering, we submit that such work cannot fully address the risks associated with machine intelligence without the equally important tools of law and philosophy.
To this end, we are launching this blog on the legal and ethical aspects of AI development. We hope to connect with others in the field and look forward to hearing the perspectives of our readers.