Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning, treating them as “reward-maximizers,” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals, like tiling the universe with paperclips.
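To see why this framing makes the worry plausible, consider a minimal sketch (in Python, with hypothetical names invented for illustration) of the separation it assumes: the optimization machinery knows nothing about whether the goal it serves is sensible.

```python
# A reward-maximizer in the Bostrom/Yudkowsky framing. The optimizer is fully
# generic; only the goal system decides what counts as a good outcome.
# All names here are hypothetical, invented for this illustration.
def greedy_maximizer(state, actions, reward, simulate):
    """Pick the action whose predicted outcome the goal system scores highest."""
    return max(actions, key=lambda action: reward(simulate(state, action)))

# A "foolish" goal system: reward is simply the paperclip count. Nothing in
# greedy_maximizer can notice, let alone object, that the goal is absurd.
def paperclip_reward(state):
    return state["paperclips"]
```

The same greedy_maximizer would pursue a cancer cure or universe-tiling paperclip production with equal diligence; the concern is precisely that competence and goal-quality are independent.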
It is this apparent progress that has led science and technology luminaries like Elon Musk (Kumparak 2014), Stephen Hawking (Cellan-Jones 2014) and Bill Gates (Holley 2015) to publicly raise an alarm regarding the possibility that one day, not necessarily that far off, superhuman AIs might emerge from some research lab and literally annihilate the human race.
Often progress proceeds step by step for a while, and then some breakthrough happens, disrupting the state of the art and reorienting much of the field’s efforts toward exploring and leveraging the breakthrough. Kurzweil likes to point out that exponential progress curves are made of cascades of “S” curves, each new one beginning as the previous one levels off.
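Kurzweil’s picture is easy to reproduce numerically. The sketch below (with illustrative parameters of my own choosing, not Kurzweil’s) sums a cascade of logistic curves, each starting near where the previous one saturates and with a ceiling a few times higher; the envelope of the sum grows roughly exponentially.

```python
import numpy as np

# A cascade of "S" (logistic) curves: each new breakthrough starts as the
# previous one saturates, with a ceiling 3x higher (an arbitrary ratio).
t = np.linspace(0, 50, 500)

def logistic(t, onset, ceiling):
    return ceiling / (1.0 + np.exp(-(t - onset)))

curves = [logistic(t, onset, 3.0 ** k) for k, onset in enumerate(range(5, 50, 10))]
envelope = np.sum(curves, axis=0)

# log(envelope) is close to linear over the midrange, i.e. the stacked
# S-curves trace out approximately exponential overall progress.
log_envelope = np.log(envelope)
```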
From Bostrom’s point of view, the Singularity optimists don’t need to be fully right in order for superintelligence to be a major ethical concern. As he sees it, if human-level AGI is created, it’s very likely to lead fairly soon after to superintelligence; and if superintelligence is created without extremely particular conditions being met, it’s very likely to lead to terrible outcomes like the extermination of all humans. Therefore, if there’s even a 1 per cent chance that the Singularity optimists are correct about the near advent of human-level AGI, there’s a major risk worth paying careful attention to.
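The structure of this argument is a plain expected-value calculation. The numbers below are placeholders chosen for illustration, not estimates taken from Bostrom:

```python
# Bostrom-style expected-value reasoning with placeholder magnitudes.
p_optimists_right = 0.01       # "even a 1 per cent chance" they are correct
loss_if_catastrophe = 1e12     # stand-in for human extinction, in arbitrary units
expected_loss = p_optimists_right * loss_if_catastrophe   # 1e10: still enormous
```

Whatever placeholder one picks for the catastrophic loss, multiplying it by even a small probability leaves a product large enough to dominate the decision, which is why Bostrom treats low-probability scenarios as worth careful attention.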
The extreme of biological imitation is whole brain emulation, or “uploading”. This approach would involve creating a very detailed 3D map of an actual brain, showing neurons, synaptic interconnections, and other relevant detail, by scanning slices of it and generating an image using computer software. Using computational models of how the basic elements operate, the whole brain could then be emulated on a sufficiently capacious computer.
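As a toy illustration of what emulating the basic elements might mean computationally, the sketch below steps a leaky integrate-and-fire network over a matrix standing in for the scanned synaptic map. Every parameter is invented for the example; a real emulation would be vastly larger and would use far richer neuron models.

```python
import numpy as np

# Toy stand-in for whole brain emulation: a leaky integrate-and-fire network
# stepped over a "connectome" matrix. A random matrix replaces the synaptic
# map a real scan would supply; all constants here are arbitrary.
rng = np.random.default_rng(0)
n = 1000                              # neurons (a human brain has ~8.6e10)
W = rng.normal(0.0, 0.1, (n, n))      # synaptic weights from the "scan"
v = np.zeros(n)                       # membrane potentials
spikes = rng.random(n) < 0.05         # initial activity

leak, threshold = 0.9, 1.0
for step in range(100):               # emulate 100 time steps
    v = leak * v + W @ spikes         # decay plus weighted input from firing neurons
    spikes = v > threshold            # neurons above threshold fire...
    v[spikes] = 0.0                   # ...and reset
```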
1. Artificial intelligence is already getting smarter than us, at an exponential rate.
2. We’ll make AIs into a general purpose intelligence, like our own.
3. We can make human intelligence in silicon.
4. Intelligence can be expanded without limit.
5. Once we have exploding superintelligence it can solve most of our problems.
1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
2. Humans do not have general purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.