Exploring the Social Science of Human-Autonomy Interaction

Weekend Reading: Fear of AI and Autonomy

In our inaugural post, I alluded to the current discussion surrounding AI/autonomy being dominated by philosophers, politicians, and engineers. They are, of course, working at the forefront of this technology and raise important points.

But focusing on their big-picture concerns can obscure a fuller view of the day-to-day role of this technology and the fact that humans are expected to interact with, collaborate with, and in some cases submit to these systems (the social science issues this blog exists to explore).

That said, one of the philosophers examining the future role of AI and the risks associated with it is Nick Bostrom, director of the Future of Humanity Institute. This 2015 profile in the New Yorker is a great way to get up to speed on the basis of much of the fear of AI:

Bostrom’s sole responsibility at Oxford is to direct an organization called the Future of Humanity Institute, which he founded ten years ago, with financial support from James Martin, a futurist and tech millionaire. Bostrom runs the institute as a kind of philosophical radar station: a bunker sending out navigational pulses into the haze of possible futures. Not long ago, an F.H.I. fellow studied the possibility of a “dark fire scenario,” a cosmic event that, he hypothesized, could occur under certain high-energy conditions: everyday matter mutating into dark matter, in a runaway process that could erase most of the known universe. (He concluded that it was highly unlikely.) Discussions at F.H.I. range from conventional philosophic topics, like the nature of compromise, to the optimal structure of space empires—whether a single intergalactic machine intelligence, supported by a vast array of probes, presents a more ethical future than a cosmic imperium housing millions of digital minds.

Warning: settle in, because this is a typical New Yorker article (i.e., very long, but satisfyingly so).

The similar-sounding Future of Life Institute has similar goals but focuses on explaining the risks of AI while also dispelling myths about it.