Dr. Stuart Russell, a distinguished AI researcher and computer scientist at UC Berkeley, believes there is a fundamental and potentially civilization-ending shortcoming in the "standard model" of AI — the model that is taught (in the field's main textbook, which Dr. Russell co-wrote) and applied virtually everywhere. Dr. Russell's new book, Human Compatible: Artificial Intelligence and the Problem of Control, argues that unless we re-think the building blocks of AI, the arrival of superhuman AI may become the "last event in human history."
That may sound a bit wild-eyed, but Human Compatible is a carefully written explanation of the concepts underlying AI and the history of their development. If you want to understand how fast AI is advancing and why the technology is so dangerous, Human Compatible is your guide — one that literally starts with Aristotle and closes with OpenAI Five's Dota 2 triumph.
Stuart's aim is to help non-technologists grasp why AI systems must be designed not simply to fulfill "objectives" assigned to them — the standard model in AI development today — but to operate so "that machines will necessarily defer to humans: they will ask permission, they will accept correction, and they will allow themselves to be switched off."