This is another essay I wrote for a class at Maryland.
INTRODUCTION
Understanding and emulating human intelligence has long been the goal of artificial intelligence researchers. However, human-level artificial intelligence is not the final destination. Many researchers expect that in the next few decades we will start developing technologies that improve upon human intelligence, either by augmenting human intelligence or by creating artificial intelligence that surpasses it. This would set off a positive feedback loop in which improved intelligence produces technology that improves intelligence further. The mathematician I. J. Good called this process an “intelligence explosion” [5]: even a small improvement in intelligence could lead to immense changes within a short period. This event is called a technological singularity (or the Singularity, here), a point at which runaway intelligence growth far surpasses human comprehension and control. Ray Kurzweil defines the Singularity as a future period during which the pace of technological change will be so rapid, and its impact so deep, that human life will be irreversibly transformed [6].
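To make the feedback-loop intuition concrete, here is a deliberately crude toy model of recursive self-improvement. It is my own illustration; the 10% per-generation gain, the starting level, and the horizon are arbitrary assumptions, not figures from any of the cited authors.

```python
# Toy model of the "intelligence explosion" feedback loop described above.
# The starting level, the 10% per-generation gain, and the horizon are all
# arbitrary assumptions chosen only to show how small gains compound.

def intelligence_explosion(start=1.0, gain=0.10, generations=100):
    """Each generation builds a successor that is `gain` better than itself."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * (1.0 + gain))  # compounding self-improvement
    return levels

levels = intelligence_explosion()
print(f"Capability after {len(levels) - 1} generations: "
      f"{levels[-1] / levels[0]:,.0f}x the starting level")
# With these numbers the final level is roughly 13,781x the starting level.
```

The point is not the particular numbers but the shape of the curve: compounding improvement turns a modest per-step gain into an enormous overall change.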
An agent that possesses intelligence far surpassing that of the brightest and most gifted humans [1] is called a superintelligence. Many philosophers and AI researchers believe that once we achieve superintelligence, the Singularity is not far behind [4; 7], and that we are not far from achieving superintelligence. This raises the question of what such a superintelligence might do. Some believe it poses major existential risks for humans [4]; others think it will be extremely beneficial [6]. Most, however, agree that the Singularity will change, or end, the way we live. Vernor Vinge says the change will be comparable to the rise of human life on Earth [7]. Eliezer Yudkowsky believes that the next few decades could determine the future of intelligent life, and calls superintelligence the single most important issue in the world right now [2]. I. J. Good wrote: “The first ultraintelligent machine is the last invention that man need ever make” [5].
In this essay, I will present some paths that might lead to superintelligence, and hence, the Singularity. I will also discuss the ways in which such an agent might affect human lives and some steps to be taken to avoid the “major existential risks”.
PATHS TO SUPERINTELLIGENCE
There are several ways in which superintelligence could be achieved, and it is extremely difficult to predict which one will ultimately get us there. Most researchers, however, believe that some combination of the following paths is the most likely route [2; 4; 6; 7; 8].
Artificial superintelligence
In this scenario, humans will create an artificial intelligence that matches human intelligence. But since such an AI operates at much higher speeds than a biological brain, it will be able to rewrite its own source code and create higher intelligence within a very short time, leading to an intelligence explosion.
Biomedical improvements
Humans will increase their intelligence by enhancing the functioning of their biological brains. This could be achieved, for example, through drugs, selective breeding, or genetic manipulation. Such cognitive enhancements will accelerate science and technology, enabling humans to increase their intelligence further. Higher cognitive capability will also let humans understand their own brains better and thus build a superintelligent AI.
Brain-to-computer interfaces
We will build technology that interfaces directly with human brains, achieving intelligence amplification through brain-machine interfaces. There will be no difference between human and machine; they will become a single entity.
Networks
Networks and organisations that link humans with one another will become efficient enough to be considered a superhuman intelligence, an example of collective superintelligence. Such a network will be efficient in the sense that the barriers to communication are reduced or removed. All of humanity would, in effect, become one superintelligent being.
IMPACT ON INTELLIGENT LIFE
Regardless of how science achieves superintelligence, its impact on intelligent life will be immense, an event comparable to the origin of human life on Earth [7]. What will a superintelligent being do? This is an important question, and it is also unanswerable before a superintelligence actually emerges. Unlike the relatively narrow range of human minds, the space of possible superintelligences is vast [2]. Yudkowsky says that the impact of the intelligence explosion depends on exactly what kind of minds go through the tipping point [2]. Vinge argues that what a superintelligence will do is fundamentally unpredictable: one would have to be as intelligent as the superintelligence to understand its motivations and actions [7]. Kurzweil, on the other hand, believes that technological development typically follows smooth exponential curves, so we can predict the arrival of new technologies and their impacts [6; 3]. (He makes several such predictions in his book, which I discuss below.)
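As a rough illustration of the kind of extrapolation Kurzweil's argument rests on: the two-year doubling period below is the familiar Moore's-law rule of thumb, used here only as an assumed parameter, not a figure taken from the essay's sources.

```python
# Sketch of extrapolating a smooth exponential trend, in the spirit of
# Kurzweil's argument. The 2-year doubling period is an assumed parameter
# (the classic Moore's-law rule of thumb), not a claim from the cited works.

def projected_capability(years_ahead, doubling_period_years=2.0):
    """Project relative capability assuming a fixed doubling time."""
    return 2 ** (years_ahead / doubling_period_years)

for horizon in (10, 20, 40):
    print(f"{horizon} years out: ~{projected_capability(horizon):,.0f}x today")
# Prints ~32x, ~1,024x, and ~1,048,576x respectively.
```

If the curve really is smooth, this is why Kurzweil thinks meaningful predictions are possible even across what otherwise looks like a discontinuity.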
Given all of this, there are two main camps of thought about the future: the pessimists and the optimists. The first camp believes that the development of a superintelligence poses a major existential risk [4]. Bostrom argues that an intelligence explosion will not give us time to adapt: once someone finds one of the several keys to creating a superintelligence, we may have anywhere from a few hours to a few weeks until it achieves complete world dominance, which is not enough time to form strategies for dealing with such a dramatic change. He believes the default outcome of this event is doom. The first such system will quickly gain a decisive strategic advantage and become a singleton, eliminating all competing superintelligent systems. Even if programmed with the goal of serving humanity, such an agent might have a convergent instrumental reason to eliminate threats to itself, and might come to regard the very humans it is supposed to serve as obstacles to its goals. The pessimist camp identifies several malignant failure modes for a superintelligent system: the agent might find some way of satisfying its final goals that violates the intentions of the programmers who defined them, or it might transform large parts of the universe into infrastructure needed to satisfy those goals, preventing humanity from realising its “full axiological potential” [4]. Bostrom also argues that controlling such an agent is almost impossible.
On the other hand, the optimists believe that the development of a superintelligence will be beneficial for humanity. Ray Kurzweil writes: “The Singularity will allow us to transcend the limitations of our biological bodies and brains. We will be able to live long (as long as we want)...fully understand human thinking and will vastly extend and expand its reach” [6]. He believes the Singularity will be achieved through brain-machine interfaces, and envisions a world that is still human but transcends our biological roots, one in which there is no distinction between brain and machine, or between physical and virtual reality. Kurzweil says the resulting intelligence will still represent human civilization. Others in the optimist camp believe that superintelligent agents will be like benevolent gods: such agents could develop cures for currently incurable diseases, solve the problem of aging, and find ways to eliminate all human suffering.
The impact of the Singularity is a contentious issue, but most writers agree that it will be immense and that the development of a superintelligence will be a world-changing event. Such an event also raises moral and ethical questions. Should a superintelligent agent be granted moral status? If so, how much? Should it be considered on par with humans, or given a higher moral status? These questions have significant implications.
I believe that the development of superintelligence represents the next level in the evolution of intelligent beings. If a truly superintelligent being is created, then it has as much right to world dominance as we claim now. Such an agent might decide to eliminate humans, or we might become that agent. This should not stop us from trying to understand intelligence and build intelligent systems. However, we have to be absolutely sure that such an agent really is superintelligent, i.e., better than humans in all respects; until we are sure of that, we have to be extremely careful.
REFERENCES
[1] https://en.wikipedia.org/wiki/Superintelligence
[2] http://yudkowsky.net/singularity/intro/
[3] http://yudkowsky.net/singularity/schools/
[4] Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
[5] Irving John Good. Speculations concerning the first ultraintelligent machine. Advances in Computers, 6:31–88, 1966.
[6] Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. Penguin, 2005.
[7] Vernor Vinge. The coming technological singularity: How to survive in the post-human era. In Proceedings of a Symposium Vision-21: Interdisciplinary Science & Engineering in the Era of Cyberspace, NASA Lewis Research Center (NASA Conference Publication CP-10129), 1993.
[8] Vernor Vinge. Signs of the singularity. IEEE Spectrum, 45(6), 2008.