
Rationality: From AI to Zombies
Author: Eliezer Yudkowsky
ISBN-10: 1939311152 / ISBN-13: 9781939311153
Genres: Philosophy, Nonfiction, Science, Psychology, Artificial Intelligence, Self Help, Unfinished, Technology, Logic
Language: English
Kindle Edition, 1813 pages
Published March 11th 2015 by Machine Intelligence Research Institute
Description
What does it actually mean to be rational? Not Hollywood-style "rational," where you forsake all human feeling to embrace Cold Hard Logic. Real rationality, of the sort studied by psychologists, social scientists, and mathematicians. The kind of rationality where you make good decisions, even when it's hard; where you reason well, even in the face of massive uncertainty; where you recognize and make full use of your fuzzy intuitions and emotions, rather than trying to discard them.
In "Rationality: From AI to Zombies," Eliezer Yudkowsky explains the science underlying human irrationality with a mix of fables, argumentative essays, and personal vignettes. These eye-opening accounts of how the mind works (and how, all too often, it doesn't!) are then put to the test through some genuinely difficult puzzles: computer scientists' debates about the future of artificial intelligence (AI), physicists' debates about the relationship between the quantum and classical worlds, philosophers' debates about the metaphysics of zombies and the nature of morality, and many more. In the process, "Rationality: From AI to Zombies" delves into the human significance of correct reasoning more deeply than you'll find in any conventional textbook on cognitive science or philosophy of mind.
A decision theorist and researcher at the Machine Intelligence Research Institute, Yudkowsky published earlier drafts of his writings to the websites Overcoming Bias and Less Wrong. "Rationality: From AI to Zombies" compiles six volumes of Yudkowsky's essays into a single electronic tome. Collectively, these sequences of linked essays serve as a rich and lively introduction to the science—and the art—of human rationality.
About the author (Eliezer Yudkowsky)
Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California.
Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed as a full-time Research Fellow there.
Yudkowsky's research focuses on Artificial Intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI); and also on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular). Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem".
Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, Artificial Intelligence, and metaethics, form the largest single body of Yudkowsky's writing.