Nick Bostrom
Nick Bostrom (born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. In 2011 he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of Oxford's Future of Humanity Institute.
Nick Bostrom: On the Simulation Argument
23 minutes
Interview with Nick Bostrom at the Future of Humanity Institute, University of Oxford. He argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
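The trilemma rests on a short piece of bookkeeping from Bostrom's 2003 paper "Are You Living in a Computer Simulation?": if f_p is the fraction of human-level civilizations that reach a posthuman, simulation-capable stage, and N is the average number of ancestor-simulations such a civilization runs, then the fraction of all observers with human-type experiences who live in simulations is f_sim = f_p·N / (f_p·N + 1). A minimal sketch of that arithmetic follows; the parameter values are illustrative assumptions, not figures from the interview:

```python
# Bookkeeping behind the simulation argument (Bostrom 2003).
# f_p: fraction of human-level civilizations that reach a posthuman,
#      simulation-capable stage (illustrative assumption).
# n:   average number of ancestor-simulations such a civilization runs
#      (illustrative assumption).
def fraction_simulated(f_p: float, n: float) -> float:
    """Fraction of observers with human-type experiences who are simulated."""
    return (f_p * n) / (f_p * n + 1)

# If even 1% of civilizations each run a million ancestor-simulations,
# nearly all human-type observers are simulated (horn 3):
print(fraction_simulated(0.01, 1_000_000))  # -> 0.9999...

# If almost no civilization survives to posthumanity (horn 1) or chooses
# to run simulations (horn 2), f_p * n is tiny and so is f_sim:
print(fraction_simulated(1e-12, 10))        # -> ~1e-11
```

The three horns of the trilemma correspond to the three ways the product f_p·N can turn out: it is driven to zero by extinction, driven to zero by disinterest, or it is large, in which case f_sim is close to one.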
Nick Bostrom: On the Future of Machine Intelligence at USI
41 minutes
Machine learning is currently advancing at a rapid rate. We will look at some current capabilities and consider some longer-term prospects of artificial intelligence. The transition to the machine-intelligence era is likely to have profound consequences for human society. We will also discuss some issues that arise when considering the possibility of machine superintelligence.
Nick Bostrom: On Superintelligence at Google
72 minutes
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, this new superintelligence could become extremely powerful, possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of a machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed artificial intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.