Nick Bostrom (born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at the University of Oxford.
Nick Bostrom: On the Simulation Argument
Interview with Nick Bostrom at the Future of Humanity Institute, University of Oxford. He argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
Nick Bostrom: On the Future of Machine Intelligence at USI
Machine learning is currently advancing at a rapid rate. This talk surveys some current capabilities and considers the longer-term prospects of artificial intelligence. The transition to the machine intelligence era is likely to have profound consequences for human society. It also discusses some issues that arise when considering the possibility of machine superintelligence.
Nick Bostrom: On Superintelligence at Google
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful, possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.