I’m interested in the implications of Arthur C. Clarke’s third law: ‘any sufficiently advanced technology is indistinguishable from magic’. Do you know of any philosophers who have addressed this? Can you recommend some reading? Thanks for your time.
Answer by Craig Skinner
Clarke’s laws appeared in his 1960s/1970s writings, although the third law appears in others’ writing at least as far back as 1866 (Rider Haggard).
Few people in our present culture explain things by magic. Even small children know that texting, drones, Google and so forth are wonders of science not magic. But no doubt if these were shown to a past (or present) culture prone to magical or supernatural explanations, the latter would be invoked to account for the marvels.
The key live philosophical discussion on advanced technology concerns a possible explosive increase in artificial intelligence (AI) and the alleged risk of humanity being eclipsed. The story goes as follows. AI is steadily improving. Soon (years or decades) human-level intelligence will be reached. This will be a no-way-back ‘singularity’, because these AIs will create even smarter machines, which, in turn, will produce yet smarter ones, quickly leaving Homo sapiens behind. The activities of these superintelligences will be as opaque to us as quantum mechanics is to my dog; they may have no place for us in their worldview, and may eliminate us.
Discussion centres on whether this is possible, whether we could still be in control, whether we need to be proactive, whether morality could be built into AI, and whether the singularity has already happened and we are all simulations in an advanced AI’s virtual world.
Now to recommended reading. Much of it is not easy going, but it’s all great fun.
Recent debate was kick-started by David Chalmers’ 2010 Journal of Consciousness Studies (JCS) article ‘The Singularity: A Philosophical Analysis’ (I think it’s available online). This was followed by two 2012 JCS issues (Vol. 19, No. 1–2 and No. 7–8) with numerous responses to Chalmers, including those of Dan Dennett, Barry Dainton, Susan Blackmore, Susan Greenfield, Frank Tipler, Igor Aleksander, Ray Kurzweil, and Nick Bostrom. Bostrom has since followed up with a terrific book, Superintelligence: Paths, Dangers, Strategies (OUP, 2014).