In this episode (recorded 9/27/17), we interview Dr. Shahar Avin of the University of Cambridge’s Centre for the Study of Existential Risk (CSER).

We discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the “value-alignment problem” and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and…