[D] Eric Drexler’s “Reframing Superintelligence”
Following the Slate Star Codex review of “Reframing Superintelligence,” I (an AI researcher) have become pretty excited to see that such a comprehensive reply exists to Bostrom-style “paperclip maximizer” fears about AGI. There’s a good summary here – Less Like Us: An Alternate Theory of Artificial General Intelligence. The basic idea is that, realistically, AI is not being developed as a unitary agent that can self-improve and do whatever it wants, so we should not fear AGIs getting out of control in that particular way.
What do you think of this reply to AGI concerns? Given present-day AI and how it is developing, the “AI services” model seems like a cogent prediction of what is actually likely to come about, and of where we need to be careful not to get things wrong.
submitted by /u/regalalgorithm