I came across a talk by philosopher Nick Bostrom, one of the smartest folks out there on topics such as the future of AI, virtual reality, and the like. It is a bit of a scary talk when he gets to the part about AI expanding beyond human control, but at the end he proposes, as about the only solution, something that will be a theme of my book: teaching certain values to the AI so deeply within its programming that it will not act in ways contrary to human interests. He does not mention Buddhism in his talk, but my book will introduce Buddhist values as being very close to just such values, e.g., not harming fellow sentient beings (specifically, us), not allowing fellow sentient beings to suffer, and the like.
Gassho, J
STLah
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School.
Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and the book Superintelligence: Paths, Dangers, Strategies (OUP, 2014). He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology; and (v) implications of consequentialism for global strategy.