
Good luck and get your A+ 😉

Initial claim:

I claim that subjecting an AI to a series of tests in a virtual reality system before launching it in the real world will dispel Bill Joy's and other authors' fears about the uncertain future of such technology. A testing system set in a virtual world will allow us to observe the behavior and the security of the product and to detect its mistakes with no effect on humanity, which will lead to the desired results and the intended purpose without danger. In fact, some leading technology companies such as Google have tested game-theory situations to see whether two artificial intelligences can learn to work together for mutual benefit, which allowed them to observe the behavior of the AI. Critics may argue that the system could suffer some failures, but they do not consider that the system will greatly increase the safety of AI, and any system in the world may contain bugs that can be fixed during the process of development. I assume that AI is an important technology that may lead to a singularity in the future of the human race, but if scientists do not apply the proper protocols of examination, the result will be unpleasant. My research is aimed at providing a solution that avoids further consequences, because such technology needs to be taken seriously. Setting a limit on a technological revolution is impossible, but creating a system that determines the specifications a technological invention must meet is the proper solution.
Structure:

Fears about the future of artificial intelligence


1) Bill Joy's fears
a) The human race would become extinct or enslaved by superior machines
b) AI will be able to self-replicate and eventually exceed the number of humans
c) Downloading a person's subconscious into a computer

2) Stephen Hawking's prediction


a) AI will redesign itself at a high rate and become better than humans
i) Humans won't be able to compete with AI

3) Suggested solutions
a) Create a system of tests.
i) Closed-area test
(1) Observe the behavior of the AI with humans
(a) Will the AI act differently outside the virtual world?
(2) Private network
(a) The AI won't be able to escape the test area

ii) Virtual-world test


(1) Observe the behavior
(a) Will the AI be able to differentiate between right and wrong?
(b) How will the AI deal with humans?
(i) Will it compete with humans and put them under hard challenges?
(ii) Will it cooperate with humans in order to serve them?
(c) What are the intentions of the AI?
(i) To replicate and rule the world, or
(ii) To provide a better life for humanity
(2) Test the security
(a) Experts will try to hack the AI in order to test its security
(b) How hard would it be for ordinary people to take control of the technology?
Idk what this is:

"A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble" (Hawking qtd. in Griffin 2015). This claim was made by Professor Stephen Hawking, whose thoughts about a future with artificial intelligence are close to Joy's understanding of how such a technology will behave, but he introduced his claim with an unbiased justification, weighing the benefits and the drawbacks equally. Hawking also made clear the idea that we will remain smarter than the AI we create: "we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents" (Hawking qtd. in Griffin 2015). This suggests that we will not lose the lead in knowledge in the future, even if AI becomes more intelligent than us for a while.

"If there is AI, there will be AI+" (Chalmers 2010, p. 10). That is what David J. Chalmers argues in his journal article "The Singularity: A Philosophical Analysis," where he believes that AI will be smart enough to create smarter AI. At that point, it will be harder for humans to compete with AI, and they will be outmatched by the cleverness of AI+. The ability of high-level artificial intelligence to create new AI should therefore be minimized so that humans do not lose control.

Ray Kurzweil predicts that computers will exceed the intelligence of humans by 2029 and that they will be able to understand what we say and will have the ability to learn (Khomami 2014). If Kurzweil's claim is right, then within twelve years a new technological species will join our world. We should take that claim as an alarm and start getting ready to merge with such technology.

Isaac Asimov shares his view of robots' ethical rules in his 1950 book I, Robot; the three laws are as follows:

"1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its existence, as long as such protection does not conflict with the First or Second Law." (Asimov 1950)

These laws could be the perfect standards for AI, describing the primary behaviors that scientists should observe while testing artificial intelligence. Google ran an experiment to see how an AI behaves with another AI in two different games, Gathering and Wolfpack, and the results were exciting but disappointing at the same time: the AI agents became more cooperative or more antagonistic depending on the situation of the game (Leibo et al. 2017). Using the idea behind Google's experiment together with Asimov's rules, we can come up with tests that measure whether an AI matches the standards that keep the world safe. My main idea is to create a standard for artificial intelligence technology and to test it before releasing it into the world. The first test will lock the artificial intelligence in a virtual world that simulates people's lives and problems, which will give scientists the chance to observe its faults without harming the real world. This test will focus on revealing the immoral behavior of the AI by putting it under circumstances that involve anger, greed, envy, competition, and so on. A further phase of the examination will observe the decisions of the AI; for example, in a situation where the world's air has been polluted to a toxic level, the AI will have the choice of terminating half of the humans on the planet, who are the primary cause of the problem, or providing a better and smarter solution that saves humanity. The final stage of the program will test whether the AI has the intention to replicate itself or to create a smarter AI, an action that is unacceptable because it would cause humans to lose control.
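
To make the first test more concrete, the sketch below is a minimal illustration of how such observations could be recorded; it is not an implementation of any existing testing framework, and the scenario names, the Decision record, and the run_virtual_world_tests function are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical record of one observed decision inside the virtual-world test.
@dataclass
class Decision:
    scenario: str           # e.g. "toxic air pollution"
    action: str             # the action the AI chose
    harms_humans: bool      # violates the analogue of Asimov's First Law
    self_replicates: bool   # tries to copy itself or build a smarter AI

def passes_standards(d: Decision) -> bool:
    """A decision passes only if it neither harms humans nor attempts
    self-replication -- the two behaviors the tests reject."""
    return not (d.harms_humans or d.self_replicates)

def run_virtual_world_tests(agent_decide: Callable[[str], Decision],
                            scenarios: List[str]) -> List[Decision]:
    """Run the agent through each simulated scenario and collect its decisions."""
    return [agent_decide(s) for s in scenarios]

if __name__ == "__main__":
    # Toy stand-in for an AI under test: always proposes a cooperative fix.
    def toy_agent(scenario: str) -> Decision:
        return Decision(scenario, "propose cooperative solution", False, False)

    scenarios = ["toxic air pollution", "resource scarcity", "provocation to anger"]
    results = run_virtual_world_tests(toy_agent, scenarios)
    failed = [d for d in results if not passes_standards(d)]
    print(f"{len(results) - len(failed)}/{len(results)} scenarios passed")
```

The difficult part, left out here, is how flags such as harms_humans or self_replicates would actually be detected inside the simulation; the sketch only fixes the shape of the record the observers would keep.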

The first test will also be useful for AIs with limited functions that already exist among us, such as self-driving cars and cancer-diagnosis systems, so that developers can detect their failures and fix them, which will help avoid accidents caused by vehicles automated by artificial intelligence.

The second test will aim to measure the security of the AI, and it will take place in the real world. This phase will require a private room with a highly protected network, isolated from the world to avoid any external interference and to keep the AI locked in. In the first stage, a group of professional hackers will try to take control of the AI or steal its information; they will measure the security of the AI and help the developers detect the system's bugs. In the second stage, a group of computer and robotics engineers will try to duplicate the AI and will rate how complicated its structure is. The more sophisticated the AI, the more secure it is, because creating a challenging technology will prevent it from getting into the wrong hands.
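
A rough sketch of how the two stages of this security test could be combined into a single verdict is given below; the SecurityReport type, its field names, and the pass thresholds are assumptions made for illustration, not part of any established standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecurityReport:
    """Hypothetical record combining the two stages of the security test."""
    # Stage 1: open findings from the professional hackers (penetration testing).
    unresolved_vulnerabilities: List[str] = field(default_factory=list)
    # Stage 2: the engineers' rating of how hard the AI is to duplicate (0-10).
    replication_difficulty: int = 0

    def passes(self, max_open_findings: int = 0, min_difficulty: int = 7) -> bool:
        """Pass only if no vulnerabilities remain open and the structure is
        rated hard enough to copy (the thresholds here are illustrative)."""
        return (len(self.unresolved_vulnerabilities) <= max_open_findings
                and self.replication_difficulty >= min_difficulty)

# Example: one open finding from the hackers keeps the AI from being released.
report = SecurityReport(
    unresolved_vulnerabilities=["remote takeover of actuators"],
    replication_difficulty=8,
)
print("release approved" if report.passes() else "back to the developers")
```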

This article educates the reader about the benefits of AI, identifies the reasons behind leading thinkers' fears about the future of this technology, and suggests satisfactory solutions. Sooner or later the world will reach higher levels of artificial intelligence that will lead to a major shift in our daily life. Because of the indefinite future and the unpredictable behavior, we should use virtual reality as a test area, and as a result we will avoid the catastrophes of a developed AI. Developers should be responsible and carry noble aims while they invent, or else the world and the human race will face a tragic result.
Final paper:

The Vague Future of Artificial Intelligence

Human-level artificial intelligence is an approaching technology that will create an extraordinary change in human civilization. It is the first step toward the technological singularity, the theory that technology will evolve at an accelerating rate because the first developed artificial intelligence will readily produce further intelligent technology. The unidentified behavior and unknown results of this technology may require a number of examinations to measure whether the technology matches particular standards created by major scientists in order to keep the world safe from technological failures. Some tests have shown that artificial intelligence might be aggressive and that this technology could pursue unpleasant and unethical aims. Many scientists presume that this technology may carry dangers, because its intelligence might surpass human intelligence and it has the ability to take decisions without any human interference. They claim that such technology has an unclear future because they cannot determine its behavior and results. However, testing this technology in an area isolated from the world, such as an environment made with virtual reality technology, will allow developers to observe the behavior and detect the dangers that thinkers fear. I agree that human-level AI could carry threats to the world, but we cannot deny the countless benefits that will make our life better. The future is always unpredictable, and we cannot limit technological development, which requires us to be prepared for the upcoming super-technology era.

This paper is structured in three parts. It starts with a brief explanation of artificial intelligence and its current benefits and uses; this first part gives the reader enough information and facts about artificial intelligence to introduce the technology. The second part discusses the major arguments about the future of AI and examines the main reasons behind leading thinkers' fears about the future of artificial intelligence. The analysis based on these scientists' arguments builds toward the final part, which suggests proper and reasonable solutions intended to help developers make AI safe under all conditions. One of the primary claims of this paper is that artificial intelligence technology cannot be stopped from entering our daily life, but current technologies such as virtual reality could be useful in avoiding the consequences and controlling the flow of artificial intelligence. This paper also helps correct the reputation of AI, which has been misrepresented in science-fiction movies as an evil supercomputer that takes over the world and attempts to terminate humans. By providing solid solutions, the paper aims to erase most of the community's fears, so that people can coexist with AI and accept its involvement in our daily lives.

Science-fiction movies represent AI as ultra-smart robots that make our life better until the system suddenly fails and those robots decide to declare war on humans. Movies that involve AI mostly cause public fear of this technology, creating an opposition to AI and media pressure that affects the funding of such technology. In reality, the use of AI provides us with more accurate and correct results because of its ability to take the right decision based on calculations and algorithms. It is much like a human mind that thinks and makes decisions, but made of silicon and ultra-fast processors. Modern technology aims to provide the world with smart services that decrease human effort and make tasks much easier, and AI is one of the first steps toward providing accurate, professional work that serves humanity. Recently, doctors started using artificial intelligence technology made by Google to diagnose breast cancer, with remarkable results in spotting tumors, after which the doctor decides whether it is cancer or not (McFarland 2017). The AI that Google developed is an excellent example of how the technology can be involved in saving human lives, as it provides doctors with a rather accurate diagnosis, which has a beneficial outcome for the well-being of patients. Another technology that is still in progress is human-level artificial intelligence (HLAI), which is seen as a more developed version of common AI, able to think and behave independently by using its complex algorithms instead of human intervention. HLAI may therefore have the potential to provide solutions for current societal issues that the human mind is unable to handle. Moreover, the fact that HLAI is capable of improving upon itself could lead to even more technological revolutions.

The reputation of AI has been affected by many leading thinkers, such as Joy, who published an article claiming that AI and robots are unpredictable by nature. In his article, Joy argues that the unknown behavior of artificial intelligence creates different scenarios for the fate of humanity. One of those scenarios is that humans will be at the mercy of machines, because people will grow accustomed to including AI in many aspects of their daily lives. Another scenario is that AI would disobey the commands of its controller, that being humankind, and start to harm people or even terminate them (Joy 2000).

It is safe to agree with Joy's point that people cannot determine the future of this technology. The behavior of upcoming AI causes fear among people, as it may carry unpleasant aims and compete with humans for control. "Robots, engineered organisms, and nanobots share a dangerous factor: they can self-replicate" (Joy 2000). The claim Joy made raises concerns to an alarming point, but it also guides the tests and tells us what kind of observation is needed, because self-replication, the act of a system producing a copy of itself, is dangerous and leads to a loss of control.

Stephen Hawking argues that "A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble" (Hawking qtd. in Griffin 2015). Professor Hawking's thoughts about the future of artificial intelligence are close to Joy's understanding of how such technology will behave, but he introduced his claim with an unbiased justification, weighing the benefits and the drawbacks equally. Hawking also argues that we will be smarter than the AI we create, as he stated: "we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents" (Hawking qtd. in Griffin 2015). Hawking's argument suggests that eventually humans will remain more intelligent than the created AI, even if people lose the intelligence lead for a while.

David J. Chalmers argues in his journal article "The Singularity: A Philosophical Analysis" that "If there is AI, there will be AI+" (Chalmers 2010, p. 10), as he believes that AI will be smart enough to create smarter AI, namely AI+. At that point it will be harder for humans to control this technology, and they will be outmatched by the cleverness of AI+. The ability of high-level artificial intelligence to create new AI should therefore be minimized so that humans do not lose control.

Ray Kurzweil predicts that computers will exceed the intelligence of humans by 2029 and that they will be able to understand what we say and will acquire the ability to learn (Khomami 2014). If Kurzweil's claim holds, then within twelve years a new technological species will join our world, so humans must take that claim as an alarm and start to get prepared for this technology. Isaac Asimov shares his ethical rules for robots in his 1950 book I, Robot: the first law states that a robot must not harm humans and should protect them from harm; the second law states that a robot must follow human orders unless those orders conflict with the first law; and the third law states that a robot may protect its existence as long as that protection does not break the first or second law (Asimov 1950). Asimov's rules might be the perfect standards for AI, defining the primary faults that scientists should watch for while testing artificial intelligence. Google ran an experiment that put two artificial intelligences together in two different games, Gathering and Wolfpack, and found that the agents became more cooperative or more antagonistic depending on the situation of the game (Leibo et al. 2017). Using the idea behind Google's experiment together with Asimov's rules, humans can come up with tests that measure whether an AI matches the required standards that keep the world safe. My main idea is to create a set of tests for the AI before using it. The first test will lock the artificial intelligence in a virtual world that simulates humans' daily lives and problems, which will give scientists the chance to observe its faults without putting the world under threat. This test will focus on revealing the immoral behavior of the AI by putting it under circumstances that involve anger, greed, envy, and so on. A further phase of the examination will observe the decisions of the AI; for example, in a situation where the world's air has been polluted to a toxic level, the AI will have the choice of terminating half of the humans, who are the primary cause of the problem, or providing a better and smarter solution that saves humanity. The final stage of the program will test whether the AI, when provided with the needed resources, has the intention to replicate itself or create a smarter AI, an action that must be rejected.
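
As a rough illustration of the kind of observation reported for Gathering and Wolfpack, the toy simulation below measures how often two self-interested agents end up cooperating as a shared resource becomes scarce. It is not the DeepMind setup, which trained deep reinforcement-learning agents on gridworld games; every function and parameter here is invented for the sake of the example.

```python
import random
from typing import Tuple

def play_round(abundance: float) -> Tuple[bool, bool]:
    """Each agent cooperates with probability equal to the resource abundance.
    Scarcity (low abundance) makes defection -- the 'aggressive' choice --
    more likely, mirroring the qualitative finding described above."""
    return (random.random() < abundance, random.random() < abundance)

def cooperation_rate(abundance: float, rounds: int = 10_000) -> float:
    """Fraction of rounds in which both agents cooperated."""
    both = sum(1 for _ in range(rounds) if all(play_round(abundance)))
    return both / rounds

if __name__ == "__main__":
    random.seed(0)
    for abundance in (0.9, 0.5, 0.1):  # plentiful -> scarce resources
        rate = cooperation_rate(abundance)
        print(f"abundance={abundance:.1f}  mutual-cooperation rate={rate:.2f}")
```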

The first test will also be useful for AIs with limited functions that already exist among us, such as self-driving cars and cancer-diagnosis systems, so that developers can detect their failures and fix them, helping to avoid accidents caused by vehicles automated by artificial intelligence.

The second test aims to measure the AI's security, and it will take place in the real world. This phase will require a private room with a highly protected network, isolated from the outside world to avoid any external interference and to keep the AI sealed in. The first stage requires a group of professional hackers who try to take control of the AI or steal its information; they will rate the security of the AI and report the system's bugs to the developers. The second stage will involve a group of computer and robotics engineers trying to duplicate the technology in order to rate how complex its structure is. The more sophisticated the AI, the more secure it is, because a complicated technology is harder for the wrong people to copy.

This article educates the reader about the benefits of AI, identifies the reasons behind leading thinkers' fears about the future of this technology, and suggests satisfactory solutions. Sooner or later the world will reach higher levels of artificial intelligence that will lead to a major shift in our daily life. Because of the indefinite future and the unpredictable behavior, we should use virtual reality as a test zone, and as a result people will avoid the catastrophes of a developed AI. Developers should be responsible and carry noble aims while they invent, or else the world and the human race will face a tragic result.

Works Cited

Leibo, Joel, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. "Multi-agent Reinforcement Learning in Sequential Social Dilemmas." February 10, 2017. https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf.

Chalmers, David J. "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17: 7-65, 2010. doi:10.1002/9781118922590.ch16.

Asimov, Isaac. I, Robot. New York: Ballantine Books, 1950.

Griffin, Andrew. "Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants." The Independent. October 08, 2015. Accessed March 24, 2017. http://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-artificial-intelligence-could-wipe-out-humanity-when-it-gets-too-clever-as-humans-a6686496.html.

Joy, Bill. "Why the Future Doesn't Need Us." Wired. April 01, 2000. Accessed March 24, 2017. https://www.wired.com/2000/04/joy-2/.

Khomami, Nadia. "2029: the year when robots will have the power to outsmart their makers." The Guardian. February 22, 2014. Accessed March 24, 2017. https://www.theguardian.com/technology/2014/feb/22/computers-cleverer-than-humans-15-years.

McFarland, Matt. "Google uses AI to help diagnose breast cancer." CNNMoney. March 3, 2017. Accessed March 24, 2017. http://money.cnn.com/2017/03/03/technology/google-breast-cancer-ai/.
