Justin C. Harris
Monica Swaner
English 102: Lesson 11
5 December 2014
Will Artificial Intelligence Mark the End of Humanity?
What does the phrase Artificial Intelligence mean to you? For many, it may conjure up images of killer robots trying to systematically destroy humanity, as in the popular Terminator movie franchise. For others, A.I. could be a friendly android trying to learn about itself, humanity, and the world it lives in, as depicted in the Star Trek TV series. No doubt science fiction, in its various mediums, has influenced our notions about what Artificial Intelligence is and what it could be. The wide spectrum of possibility associated with A.I. is indeed based on ignorance, but only because A.I. does not yet exist in a meaningful way; it is not relevant to the average person in their daily life.
Whatever your current ideas are about Artificial Intelligence, my research indicates that A.I. is real, it is coming, and it has the potential to change humanity so profoundly that we may become unrecognizable to ourselves. The dream of A.I. is not simply about creating an autonomous machine that can do man's bidding. Artificial Intelligence is about creating a new consciousness, one that can think on its own, learn on its own, and draw its own conclusions. This is cutting-edge, uncharted territory, and like the first human sailors, we do not know what monsters or riches we may happen upon. However, like any celestial navigator, we cannot know where we are going if we do not know where we are. So let us look honestly and objectively at the world of A.I. as it is now, so that we may catch a glimpse of the future. Will Artificial Intelligence elevate humanity to the heights of our potential, or bring us to our knees?
The answer to this question is not so black and white, although it is often framed that way. A.I. is, in its purest form, technology. Like all technology exploited by humans, Artificial Intelligence can be used for our benefit and for our destruction, often simultaneously.
As of now, Cold War era nuclear bombs are being repurposed so that their radioactive material can be used in nuclear power plants. Most would agree this is great news, and it is a fitting analogy for A.I. The proliferation of atomic energy is a perfect example of the immense power of technology: the power to destroy the world and every living thing a hundred times over, and simultaneously the power to bathe all of humanity in light, warmth, and energy for thousands of years. In the case of A.I., the stakes cannot be overstated.
Artificial Intelligence is likely to come into existence, but it does not pose a realistic threat to humanity. While there is no clear indication that A.I. will be inherently dangerous, there are genuine concerns regarding the use and implementation of this technology. The implications of a higher intelligence interacting with the human species represent a new potential in human evolution.
One of the main arguments surrounding artificial intelligence is the question of whether it can exist in a meaningful way. The concept used to define this paradigm shift is called the Singularity. Put simply, the Singularity is the moment when artificial intelligence exceeds human intelligence. This is a moment at some time in the future, and because true artificial intelligence does not currently exist, some question whether it is even possible. How can we know the future? There is no certainty about what advances we, the human species, will make in due time.
Yet we can look at our current path, and we can look to where we have been to see where we might be headed, and we should consider that "Enormous progress has been made in the field of robotics since the 1980s, and only time will tell what scientists will be able to create over the next century" ("Artificial Intelligence"). Indeed, humanity has come a long way from its humble beginnings. We have advanced much: we have harnessed the power of the atom and begun the exploration of our solar system, and for all of our achievements, our most significant accomplishments have occurred in the last two hundred years of our one-hundred-thousand-year existence. In a broad view, this denotes an acceleration in our progress, and we can make short-term projections based on this analysis. Recently, "As of February 1, 2010, AI's Web site reports that a machine is able to communicate at the level of an eighteen-month-old child. The programmers at AI believe Hal (the machine) will be able to communicate at an adult level in the next ten years" ("Artificial Intelligence").
This acceleration is an observation that has given rise to concepts such as Moore's Law, the principle that the number of transistors on an integrated circuit doubles approximately every two years. Based on this principle, we can extrapolate its relevance to our society and economy. A strong advocate of A.I., the computer scientist and inventor Ray Kurzweil, has proposed a concept called the law of accelerating returns, in which he states, "...Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths" ("Kurzweil Responds").
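To make the arithmetic behind that claim concrete, here is a minimal sketch in Python of how a fixed two-year doubling period compounds. The starting transistor count, the time spans, and the exact doubling period are illustrative assumptions for the sake of example, not figures taken from the sources.

```python
# Minimal illustration of Moore's Law as a fixed doubling period.
# The starting count and time spans below are assumptions, not data.

def projected_transistors(start_count: int, years: int, doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming one doubling every `doubling_period` years."""
    doublings = years / doubling_period
    return start_count * (2 ** doublings)

if __name__ == "__main__":
    start = 1_000_000  # hypothetical chip with one million transistors
    for years in (2, 10, 20, 40):
        print(f"After {years:2d} years: ~{projected_transistors(start, years):,.0f} transistors")
    # Twenty years of doubling every two years is 2**10, roughly a thousandfold increase.
```

The point of the exercise is simply that a constant doubling period produces growth that looks slow at first and then overwhelming, which is the intuition behind the law of accelerating returns.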
Dissenting voices would argue that we are too far away from a working model of A.I. to know whether it is possible, and that our current lack of understanding about our own minds creates a paradox in our hopes of creating another intelligent being. While these criticisms have merit, there is no controversy about the fact that computer technology will continue to increase in capacity and decrease in size at an accelerating rate. As engineers and inventors continue to push the envelope of science and technology, they work diligently with a focused effort, and from those efforts it is likely that we will one day marvel when "Eventually the AI becomes sophisticated enough to start improving itself-not just small improvements but improvements large enough to cascade into other improvements ... and the AI leaves our human abilities far behind" ("Artificial Intelligence Will Exceed Human Intelligence").
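The "cascade" described in that quotation can be pictured with a toy feedback loop. The sketch below is purely illustrative: the growth rule, the numbers, and the "human baseline" are invented assumptions, not anything proposed by the cited authors. It only shows the shape of the argument, that a system whose rate of improvement grows with its own capability stays unimpressive for a long time and then overshoots quickly.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Assumption: each round multiplies capability by a factor that itself grows
# with current capability, so gains compound on earlier gains.

def self_improvement_cascade(capability: float = 1.0,
                             human_baseline: float = 100.0,
                             rounds: int = 25) -> None:
    for step in range(1, rounds + 1):
        improvement_factor = 1.0 + 0.1 * (capability ** 0.5)  # better systems improve faster
        capability *= improvement_factor
        print(f"Round {step:2d}: capability {capability:8.1f}")
        if capability > human_baseline:
            print(f"Capability passes the assumed human baseline at round {step}.")
            return
    print(f"After {rounds} rounds, capability is still {capability:.1f}.")

if __name__ == "__main__":
    self_improvement_cascade()
```

Under these made-up parameters the system crawls for roughly twenty rounds and then blows past the baseline in the last few, which is the pattern the "intelligence explosion" argument relies on.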
The day the Singularity comes will mark a turning point for humanity. What can we expect from this turning point? There are so many possibilities, for good or ill. History has certainly shown us that the future is not always what we expect, but what can we expect from A.I. once it has come to fruition? Primarily, we expect it to help us solve our problems; that is the fundamental purpose behind creating it. However, there is a paradox in creating a superior intelligence to surpass our limitations: how can we possibly hope to control a superior intelligence? The development of A.I. has been, and will continue to be, a gradual process, and its development will be suited to our needs. The design of A.I. is critical to understanding its purpose, and its design will rely on interaction with human beings. As such, the need to develop safeguards for that interaction will begin early in its construction and persist through its advancement. "As machines with limited autonomy operate ... in open environments, it becomes increasingly important to design a kind of functional morality that is sensitive to ethically relevant features of [given] situations" ("Engineering Morality into Robots Will Be Necessary").
A.I. is, after all, technology, and like all technology it is only as valuable as it is useful. In the early days of the mass production of automobiles there was at times chaos; vehicle safety was a problem, not only in terms of the vehicles themselves but in how we interacted with them. Seat belts, traffic lights, traffic laws, and airbags were developed in response, and such auxiliary technologies continue to evolve as we adapt the automobile to suit our needs.
Certainly, by the time the Singularity occurs, the safety of this human and machine interaction will be well ingrained and embedded in this new intelligence. However, looking further into the future, at a time when machines are creating and improving themselves, when they rely on their own intellect to make informed decisions about their construction, it is possible that these long-developed safeguards could become convoluted or be removed entirely. The morality of machines could even become misguided out of necessity: we may require them to be devoid of "morality" as we require them to operate and be suited to more demanding tasks. This could backfire on us, and we may find ourselves at the mercy of stronger, faster, smarter autonomous beings.
This prospect, though possible, is only one potential outcome, and certainly we would still possess the ability to create equally intelligent machines to safeguard us, even if we had to adapt more advanced technology. "What we need to do is create a mind within the humane pathway, what I have called a Friendly AI. That is not a trivial thing to attempt" ("Friendly AI Is Needed to Protect Humans from Unethical Super-Robots").
There is danger in our undertaking to create A.I., without a doubt, and while there are no guarantees as to the overall safety of this technology and what it could become, these prospects underscore the importance of the thoughtful consideration and careful design that must be implemented as we allow these machines more autonomy and more control over our lives.
"Whatever we really build, we will be the ones who built it. The danger is that we will construct AI without really understanding it." ("Friendly AI Is Needed to Protect Humans from Unethical Super-Robots."
The polarization of these ideas is a testament to the folly of human thought patterns; the black-and-white scenarios we create when confronted with the unknown characterize our history and permeate every aspect of human culture. There are, however, more reasonable voices that may prevail. Computer technology is not the only technology that is accelerating. Technology is increasing and accelerating on all fronts: we have mapped the human genome and are currently translating its code. We are on a quest to understand ourselves and our minds, and technology is not just advancing; separate disciplines are converging.
Technology is becoming an extension of the individual person. Google Glass is a perfect example of the push to integrate human beings with computers: the device allows the user to see images superimposed on the field of view and to interact virtually with the real world. As technology becomes increasingly smaller, we will inevitably find it on and in our bodies, interacting with us, guiding and informing us in our daily lives. Artificial hearts have already been in use for many years, and as more artificial devices become available, their use will increase as people seek to extend their lives and maintain their health and vitality.
Let us propose, for example, that an artificial eye were created. Along with the restoration of a person's sight, perhaps we could not only replace but improve upon a person's lost vision. Infrared vision and magnification may be options; why would we settle for our current, limited visual capabilities? "[Coming] technological revolutions will allow us to transcend our frail bodies with all their limitations" ("Future Technology Will Benignly Alter Human Existence").
These marvels of technology are indeed a precursor of the direction we are headed and of things to come. Our desire to transcend our mortality will inevitably transform us into a hybrid of man and machine. As we advance and require longer extensions of our lives, we may one day find ourselves more machine than human. This merger will not be restricted to our bodies; our minds will one day be integrated to benefit from enhancements as well.
"This merger of man and machine, coupled with the sudden explosion in machine intelligence and rapid innovation in the fields of gene research as well as nanotechnology, will result in a world where there is no distinction between the biological and the mechanical, or between physical and virtual reality." ("Future Technology Will Benignly Alter Human Existence.")
Our road to becoming superhuman is a long one, and although recent advancements in technology and biology are impressive, they are milestones that illustrate how far we still are from the dreams of transhumanism. Our ignorance about our own minds seems a formidable obstacle; after all, how can we possibly hope to create another intelligent being if we don't fully understand our own consciousness? Technical revolutions in the study of the mind will not only allow us to enhance our own brains but will create a framework for building highly intelligent thinking machines. "A theoretical neuroscientist at NSI insists that in an unpredictable world, mimicking the brain on a detailed level will provide advantages that other approaches cannot. For example, real brains often have a lot of redundancy when it comes to performing a particular task" ("Building Intelligent Machines by Copying Living Brains").
These undertakings are already underway. Government and private research into the human mind is being conducted, and with every revelation and new discovery that enhances our self-understanding, we will be one step closer to creating artificial intelligence while simultaneously enhancing our own minds. "These systems will arise, say the researchers, by emulating the brain's neurons and the way they are connected to each other" ("Building Intelligent Machines by Copying Living Brains").
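As a rough illustration of the redundancy Fox describes, and not the NSI researchers' actual model, which the source does not detail, consider a toy simulation in which many noisy, neuron-like units all attempt the same task and a simple majority vote is taken. The error rate and unit count below are invented assumptions, chosen only to show why redundancy helps.

```python
# Toy illustration of redundancy: many unreliable units, one reliable vote.
# The error rate and unit count are illustrative assumptions, not data from the source.
import random

def noisy_unit(correct_answer: bool, error_rate: float = 0.2) -> bool:
    """A single unreliable 'neuron-like' unit that sometimes gives the wrong answer."""
    return correct_answer if random.random() > error_rate else not correct_answer

def redundant_vote(correct_answer: bool, n_units: int = 15) -> bool:
    """Majority vote across redundant units performing the same task."""
    votes = sum(noisy_unit(correct_answer) for _ in range(n_units))
    return votes > n_units / 2

if __name__ == "__main__":
    trials = 10_000
    single_ok = sum(noisy_unit(True) for _ in range(trials)) / trials
    group_ok = sum(redundant_vote(True) for _ in range(trials)) / trials
    print(f"Single unit correct:     {single_ok:.1%}")   # around 80%
    print(f"Redundant group correct: {group_ok:.1%}")    # well above 99%
```

The unreliable pieces, taken together, behave far more dependably than any one of them, which is the advantage brain-inspired designs hope to borrow from biology.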
While our path to creating beings not unlike ourselves may seem treacherous, so is our journey to recreate ourselves. These goals are one and the same: the advancements we make in our own biology will no doubt lead to revelations in computer technology, and vice versa. As we delve deeper into science, dealing with ever smaller parts of matter, the lines between the biological and the mechanical become blurred. The fears we have of Artificial Intelligence, real or imagined, are the consequence of the power that comes with knowledge. This is the human condition; this has always been our story. What will we do with our newfound power? Will we harness it and wield it? Will we exploit it and abuse it? This is the next step in our evolution, a test that will determine whether we have the right to transcend our humble beginnings. We seem to be faced ever more frequently with the prospect of technology that will be our undoing. If we are to survive as a species, we must embrace the responsibility that comes with our increased understanding. Ultimately, our fate will be determined by our resolve to transcend our ignorance, and in the age of information, ignorance is our choice.
"We've arranged a society based on science and technology, in which nobody understands anything about science and technology. And this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces. Who is running the science and technology in a democracy if the people don't know anything about it?"
~ Carl Sagan.
Works Cited
Allen, Colin. "Engineering Morality into Robots Will Be Necessary." Robotic Technology. Ed. Louise Gerdes. Farmington Hills, MI: Greenhaven Press, 2014. Opposing Viewpoints. Rpt. from "The Future of Moral Machines." New York Times 25 Dec. 2011. Opposing Viewpoints in Context. Web. 1 Dec. 2014.
"Artificial Intelligence." Opposing Viewpoints Online Collection. Detroit: Gale, 2014. Opposing Viewpoints in Context. Web. 2 Dec. 2014.
Fox, Douglas. "Building Intelligent Machines by Copying Living Brains." Artificial Intelligence. Ed. Sylvia Engdahl. Detroit: Greenhaven Press, 2008. Contemporary Issues Companion. Rpt. from "Brain Box." New Scientist 188 (5 Nov. 2005). Opposing Viewpoints in Context. Web. 2 Dec. 2014.
Kurzweil, Ray. "Future Technology Will Benignly Alter Human Existence." Technology and Society. Ed. David Haugen and Susan Musser. Detroit: Greenhaven Press, 2007. Opposing Viewpoints. Rpt. from "Reinventing Humanity: The Future of Human-Machine Intelligence." Futurist (Mar.-Apr. 2006). Opposing Viewpoints in Context. Web. 1 Dec. 2014.
Kurzweil, Ray. "Kurzweil Responds: Don't Underestimate the Singularity." MIT Technology Review. MIT Technology Review, 19 Oct. 2011. Web. 02 Dec. 2014.
Muehlhauser, Luke. "Artificial Intelligence Will Exceed Human Intelligence." Facing the Intelligence Explosion. 2013. Rpt. in Robotic Technology. Ed. Louise Gerdes. Farmington Hills, MI: Greenhaven Press, 2014. Opposing Viewpoints. Opposing Viewpoints in Context. Web. 2 Dec. 2014.
Yudkowsky, Eliezer. "Friendly AI Is Needed to Protect Humans from Unethical Super-Robots." Artificial Intelligence. Ed. Noah Berlatsky. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Rpt. from "Why We Need Friendly AI." Terminator Salvation: Preventing Skynet. 2009. Opposing Viewpoints in Context. Web. 2 Dec. 2014.