The (Potentially Scary) Future of AI
If any of you have seen the new movie Ex Machina, you probably have a bad feeling about the whole AI thing right now. The premise is that a brilliant researcher has developed what they believe to be the most intelligent robot ever made, and brings someone in to confirm this notion, mainly through the Turing Test.

No spoilers here (go see it now), but the film’s version of AI leaves the viewer with many more questions than answers. What defines intelligence? What does it mean to be human? Is our sense of self distinct from an AI’s? What happens when AI intelligence surpasses that of humans? Classic science fiction stuff: an entertaining, but unrealistic, thought exercise.
Or is it? Elon Musk doesn’t think so. Neither does Steve Wozniak[1] (I love how this story is followed up with “NOW WATCH: The US Navy just unveiled a robot that can walk through fire.” Thanks, Business Insider. Definitely not freaking out.). Their reactions seem to be driven by fear of a complete and total takeover by machines. In fact, a whole field has emerged, populated by experts in robotics and beyond, dedicated to finding approaches to AI creation and development that would prevent this kind of thing from happening. The Future of Life Institute is one of the more well-known organizations in this realm, created as a sort of moral compass for AI researchers, asking them to think before they act. Here’s a snapshot of their board:

Those are some pretty heavy hitters, no doubt. The issue is so pressing that Elon Musk himself recently donated $10 million to the cause. The open letter published by the Institute outlines their purpose:
“The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI… We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”
Great Expectations
Stories and organizations like this used to baffle me. It’s not that these aren’t good concerns to have; it’s that, as fields, AI and robotics are still so far from the point where a machine could present any sort of ethical or moral conundrum, even given all the advances of the last few decades. Case in point:

To me, these worries were as unrealistic as hopes for cold fusion[2] (I dabbled.) were in the ’90s: the future seemed clear to some, but the science was never going to back it up.
Lately, though, I’ve begun to rethink my stance. To be clear, I still highly doubt that robots will ever be the end of humanity, but organizations like the Future of Life Institute have gotten me wondering. The members above certainly aren’t fear-mongers, yet clearly there’s something to be concerned about if they’re putting fortunes into research on the ethics of AI.
Next Up…
To see things from this viewpoint, I think it’s best to start with a snapshot of where AI is now. There has been incredible progress in the field, and the research has started to move out of academia and into the marketplace, creating greater (financial) incentives for better methods. Understanding the state of current AI technology, along with what’s hot and what’s not, is crucial.
With this knowledge under our belts, it will become easier to analyze the concerns of Elon, Steve, Alan Alda, et al., as well as to form a realistic perspective on the future. It might be that the most extreme opinions are the right ones; in that case, we should all brush up on our paradoxes. Otherwise, it would be interesting to analyze how these researchers and innovators came to such drastic conclusions, or better yet, to uncover how drastic those conclusions actually are. The media does have a flair for the dramatic…[3] (See [1].)
Next week - “The Rise of AI, Part II: The Current State of AI”.
EDIT (6/19/15): Definitely took a lot longer than a week.