
BOLIVARIAN REPUBLIC OF VENEZUELA

MINISTRY OF PEOPLE'S POWER FOR DEFENSE


NATIONAL UNIVERSITY EXPERIMENTAL POLYTECHNIC
OF THE BOLIVARIAN NATIONAL ARMED FORCE
UNEFA- NÚCLEO LA GUAIRA
FIRST SEMESTER, SYSTEMS ENGINEERING (DAY PROGRAM)
ENGLISH

Artificial Intelligence

Teacher: Rocio Hernandez

Student: Jaime Vargas, CI 28013002

Catia La Mar, April 6, 2020

Introduction
In this work, the aim is to give a general view of the vast subject of artificial intelligence: the history behind it, who laid its foundations, how the present day has evolved around it, its uses and applications, the points of view of some notable figures, and what the future of this discipline may hold.

Artificial intelligence is known as a branch of computer science that focuses on the creation of computational algorithms that can show intelligent behavior, a characteristic of living beings and especially of humans, and on their different applications in life. That is, it creates computer programs that can mimic the cognitive functions of living beings and carry out activities that (for the time being) only human beings can perform, in order to improve our lifestyles. These functions are usually perception, reasoning, learning, and problem solving.

It is worth noting that, as technology advances and machines gain processing capacity, certain technologies that were once thought to require such intelligence get removed from this definition. The most common example is optical character recognition, which is no longer perceived as "artificial intelligence" because it has become a common technology, while other advances are still classified as artificial intelligence, such as autonomous driving systems or programs capable of playing chess or Go.

These systems are able to analyze data in large quantities (what is also known as big data), identify patterns and trends, and formulate accurate, consistent predictions automatically and much more quickly than any human being.

The term "artificial intelligence" was first used in 1956 by John McCarthy at Dartmouth College in the United States, during a conference also attended by Marvin Minsky and Claude Shannon. McCarthy defined it as "the science and ingenuity of making intelligent machines, especially intelligent computer programs."

The idea, though, existed long before: it can be traced back to the time of Aristotle, who described basic rules covering part of the functioning of the mind in reaching rational conclusions, or to the myth of the giant Talos, a legendary automaton with artificial intelligence, or later to 1315, when Ramon Llull in his book Ars magna conceived the idea that thought could be produced artificially. But the greatest advances in the field began in 1936 at the hands of Alan Turing, a mathematician considered one of the fathers of computer science and a forerunner of modern computing (remember that computing itself is considered to have been born with Ada Lovelace and the programmable Analytical Engine in 1842). Turing laid several of the foundations of this discipline in his work on mathematics and computing.

One of his first contributions was his "automatic machine", presented in 1936 in the journal Proceedings of the London Mathematical Society. This machine, better known as the "Turing machine", is a simple device that manipulates symbols on a strip of tape according to a table of rules. It is not a practical computing device but a hypothetical one that represents a computer; it helps us understand the limits of mechanical calculation and shows that there are problems a machine cannot solve. It mattered in two ways for the development of computing: first, it was one of the first theoretical models of a computer; second, the Turing machine has served as the basis for much theoretical development in computer science and complexity theory, because it is such a simple model that it is easy to analyze.
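To make the idea concrete, here is a minimal sketch of how such a rule table can be simulated in Python (an illustrative example, not Turing's own formulation; the hypothetical rules below implement unary increment):

```python
# A minimal Turing machine simulator. The rule table appends one '1'
# to a run of '1's on the tape, then halts.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=100):
    """rules maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, 0, +1}."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # tape, unbounded on both sides
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

rules = {
    ("start", "1"): ("start", "1", +1),  # scan right over the existing 1s
    ("start", "_"): ("halt", "1", 0),    # first blank cell: write a 1 and halt
}
print(run_turing_machine(rules, "111"))  # -> "1111"
```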

Another of his contributions, this one more focused on artificial intelligence, was the ACE (Automatic Computing Engine) project at the National Physical Laboratory (NPL), from 1947 to 1950 (a period in which he also wrote some articles on artificial intelligence). The ACE was a machine that could be configured to do algebraic calculations, decrypt codes, manipulate files, and play chess. Although it was a feasible project (even for the time), it was never completed, but during those years it produced great advances for computer science.

In 1950 he published an article called "Computing Machinery and Intelligence" which, among other things, deals with the problem of artificial intelligence and proposes an experiment now known as the Turing test. It was intended to define a standard by which a machine could be classified as "sentient", or in other words, a test in which a machine could not be differentiated from a person. In the same article he also suggested that instead of constructing a mind that simulated an adult's, it would be easier and more feasible to simulate the mind of a child and then raise it. Sadly, his career was cut short by social persecution in 1952, and he died two years later.

One of the most interesting later developments is that the Turing test was eventually passed by a computer that made several interrogators (over thirty) believe it was a person answering their questions, during an event organized in London by the University of Reading (United Kingdom). The program, called Eugene and developed in St. Petersburg (Russia), passed itself off as a 13-year-old boy, and those responsible for the competition considered it a "historic milestone of artificial intelligence".

By the 1960s this branch began to see great progress, no longer only theoretical but practical, with two new developments: machine learning, and the interest that the United States Department of Defense began to show in this technological approach, drawing up plans for the creation of machines that imitate basic human reasoning. One of the most interesting projects was carried out by DARPA (the Defense Advanced Research Projects Agency), which produced street-mapping projects in the 1970s and led to smart personal assistants by 2003, years before the commercial artificial intelligences Alexa, Siri, and Cortana were born and became popular.

Another advance made during those years was the ELIZA project. Developed at MIT, it was one of the first programs to process natural language and converse through a series of programmed phrases.
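The flavor of ELIZA can be conveyed with a minimal pattern-matching sketch (the rules here are invented for illustration; the real MIT program was far richer and also reflected pronouns):

```python
# An ELIZA-flavored sketch: match a keyword pattern, answer with a
# canned phrase built from the matched fragment.
import re

RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default canned phrase when nothing matches

print(respond("I am worried about my exams"))
# -> "How long have you been worried about my exams?"
```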

But the advances made in that period were few, and there was talk of abandoning the discipline given how little progress even promising government projects had made; it was not until the rise of the internet that this discipline would see the light again.

Almost all the original artificial intelligence projects had one specific objective: to describe natural intelligence so precisely in an algorithm that a machine would be able to read and process it. This goal was given the term "generic artificial intelligence", a concept we know best from science fiction in different areas: in literature, for example, in the books of the acclaimed writer Isaac Asimov, whose hypotheses gave input to this and other disciplines such as robotics, and who also raised moral and philosophical questions about artificial intelligences and about technological advances themselves; or in cinema, in iconic films such as "2001: A Space Odyssey" or "The Matrix". But this discipline also created specific areas of research into parts of what we know as intelligence, and it is these that today have produced significant results in people's daily lives; this is known as "non-generic" or "weak" artificial intelligence.

These terms were first described by the philosopher John Searle in a critical article on the issue, which also introduced a subtype of the generic AI known as "strong" AI. He argued that the strong version would not be a simulation of a mind but a mind in itself, and would consequently have an intelligence at least equal to that of any human being. The difference between the "generic" and the "strong generic" was that the generic kind could focus on multiple tasks but lacked consciousness, mental states, or feelings, which the strong version would possess. The weak ones, on the other hand, were not multitasking and therefore not generic, and consequently could not have any type of mental state; but being focused on something specific, their decision-making capacity was generally superior to that of any human (only in the area for which they were designed, and as long as they did not have to process much context, or the context was not relevant).

The development of broadly capable artificial intelligences has resulted in many ramifications, but in 2009 Stuart Russell and Peter Norvig differentiated four types:

● Systems that think like humans, such as artificial neural networks. These are the most basic of all and can be said to lay the groundwork for the others, because here systems are created that automate activities we associate with human thought processes, such as decision-making and learning.
● Systems that act like humans, like robots: systems that try to do tasks that at the moment can only be done by humans, such as facial expressions or movements like dance or parkour. An example of this is the Boston Dynamics Atlas robot, capable of flipping in the air.
● Systems that use rational logic, like expert systems: systems that try to emulate a human expert in a certain subject, from a technical-service worker to a receptionist, movie buff, or economist (a toy sketch follows this list).
● Systems that act rationally, such as intelligent agents, which were the first to arrive in everyday life. These are entities capable of giving a response through the analysis of data according to specific rules; today we know them as chatbots, capable of holding a conversation like a person. The most famous example was ALICE, created in early 2000, and its successor Mitsuku, which won the Loebner Prize for the best chatbot in 2013 and from 2016 to 2018.
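As a rough illustration of the expert-system idea from the third point, here is a toy forward-chaining sketch; the medical-style facts and rules are invented purely for demonstration:

```python
# A toy "expert system": facts are strings, and each rule adds its
# conclusion once all of its premises are already known.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "recommend_rest"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until no new conclusion appears
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}))
# derives both 'flu_suspected' and 'recommend_rest'
```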

Some other disciplines have separated from AI but still keep a relationship with it, such as machine learning, a branch or sub-discipline of computer science born of artificial intelligence and produced by computer science and neuroscience. It grew out of Alan Turing's question of whether machines could think, and his idea that it was more feasible to teach a machine from scratch than to build it fully formed.

Machine learning emerged at the end of the 1950s and into the early 1960s. With this approach, the intention was to make machines learn about something without being expressly programmed for it. Over the years, the discipline began to focus on different computational problems, such as probabilistic reasoning, statistics-based research, and information retrieval, while going ever deeper into pattern recognition. By the 1990s it had consolidated as a discipline in its own right and ceased to be a sub-discipline of artificial intelligence, although many purists continue to consider it part of AI.

The main objective of machine learning is to address and solve practical problems, and for this it relies on pattern recognition (the cornerstone of the discipline), which manages to turn data into programs or algorithms capable of drawing inferences from new data sets on which they have not been previously trained. A very important feature of these algorithms is the prediction of new cases based on the experience learned from the data set used for training; this is known as generalization.
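A tiny sketch of generalization, with made-up numbers: a model is fitted to a handful of training points and then asked about an input it has never seen.

```python
# Generalization in miniature: fit a line to four training points,
# then predict an input that was never part of the training set.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x  # (slope, intercept)

train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0]  # the training "experience"
slope, intercept = fit_line(train_x, train_y)
print(slope * 10 + intercept)  # prediction for the unseen input x = 10
```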

Machine learning is generally divided into three branches. The first is "supervised" learning, in which each piece of data entered has a "label": a variable that indicates what happened. This kind of learning divides in two, classification and regression. In classification, the outputs of the system are finite, or in other words bounded; they are also discrete and are interpreted as the class an input belongs to, for example "0" or "1", "false" or "true", "yes" or "no". In regression, the outputs are continuous. Examples of the use of this type are voice recognition, spam detection, and handwriting recognition, among many others.
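As a minimal illustration of supervised classification (toy features and labels, assumed for the example), the 1-nearest-neighbour rule below returns a discrete class for a new point, in contrast with the continuous outputs of regression:

```python
# Supervised classification in miniature: every training example is
# labelled, and the closest labelled example decides the class.
training_set = [  # (feature vector, label)
    ([1.0, 1.2], "spam"),
    ([0.9, 1.1], "spam"),
    ([3.0, 3.4], "not spam"),
    ([3.2, 2.9], "not spam"),
]

def classify(point):
    def squared_distance(example):
        features, _ = example
        return sum((a - b) ** 2 for a, b in zip(features, point))
    _, label = min(training_set, key=squared_distance)  # nearest neighbour wins
    return label

print(classify([1.1, 1.0]))  # -> "spam": a discrete output, unlike regression
```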

Another type of learning is "unsupervised". Unlike the previous one, the data here does not carry the aforementioned "label", or the label is very vague; that is, there is no target variable, and what is required is to look for patterns. To determine what you want to predict, you can find structures in the data. Among these structures are clustering (partitioning a data set into a set of meaningful subclasses called groups) and association (finding sets of significant features). Examples are the detection of morphology in sentences, the classification of information, and many more.
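Clustering can be sketched in a few lines; in this illustrative 1-D k-means the points carry no labels at all, and the two group centres emerge from the data alone:

```python
# Unsupervised clustering in miniature: 1-D k-means on unlabelled points.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        groups = {c: [] for c in centers}
        for p in points:  # assign each point to its nearest centre
            nearest = min(centers, key=lambda c: abs(c - p))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) for g in groups.values() if g]  # move the centres
    return sorted(centers)

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]  # two obvious clumps, no labels attached
print(kmeans_1d(data, centers=[0.0, 10.0]))  # -> roughly [1.0, 8.07]
```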

The last type is "reinforcement" learning, where the machine learns by trial and error across a number of different situations. Unlike the previous ones, here the machine knows the possible outcomes beforehand but has no ranking of which outcomes are best, nor ways to reach them; what it does is run the algorithm progressively, associating the patterns that lead to success and repeating them over and over until they are refined. Examples of this are autonomous vehicle navigation, decision-making, and so on.
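The trial-and-error spirit of reinforcement learning can be sketched with a toy two-armed bandit (the payout odds are invented for the example): the agent tries actions, tracks the average reward of each, and ends up repeating the one that pays best.

```python
# Trial-and-error learning in miniature: an epsilon-greedy bandit.
import random

payout_odds = {"A": 0.3, "B": 0.8}  # hidden from the agent (assumed values)
values = {"A": 0.0, "B": 0.0}       # the agent's learned reward estimates
counts = {"A": 0, "B": 0}

for _ in range(1000):
    if random.random() < 0.1:  # explore 10% of the time
        action = random.choice(["A", "B"])
    else:                      # otherwise exploit the best current estimate
        action = max(values, key=values.get)
    reward = 1 if random.random() < payout_odds[action] else 0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(values)  # the estimate for "B" converges near 0.8, so "B" dominates
```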

Even so, there are other, more complex approaches for more specific tasks, but given the introductory scope of this work it is not worth delving into them. And although this discipline is relatively new, one must go back to the 19th century to find its mathematical bases. An example is Bayes' theorem, whose modern formulation dates from 1812, which defines the probability of an event occurring based on prior knowledge of conditions that could be related to that event.
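A worked example of Bayes' theorem with assumed numbers, in the style of a toy spam filter: the probability that a message is spam given that it contains a certain word is P(spam | word) = P(word | spam) · P(spam) / P(word).

```python
# Bayes' theorem with illustrative, assumed probabilities.
p_spam = 0.2             # prior: 20% of all mail is spam (assumption)
p_word_given_spam = 0.6  # the word appears in 60% of spam (assumption)
p_word_given_ham = 0.05  # ...and in 5% of legitimate mail (assumption)

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # -> 0.75: the word raises 0.2 to 0.75
```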
Later, more specifically in the 1940s, a series of scientists laid the basis of computer programming, capable of translating a series of instructions into actions executable by a computer. In the 1950s, through the work of the mathematician Alan Turing discussed earlier, the seed of "artificial intelligence" was sown, and by the 1960s the field had its first success with the creation of artificial neural networks. One of the first successful experiments was carried out by Marvin Minsky and Dean Edmonds, scientists at MIT, who managed to create a computer program capable of learning from experience to get out of a maze.

Thus the first computer able to learn by itself was created: it solved a task it had not been explicitly built for, learning instead from the examples provided at the start. But despite this achievement, the experiment also demonstrated the technological limits of the time, namely the lack of available data and the lack of computing power, which kept the problems solvable by the new methods very limited. This pushed the discipline into a period called the "first winter of artificial intelligence", a stretch of time in which the lack of results and progress made the academic world lose hope in the field.

At the end of the 1990s, with the spread of the internet, the amount of information available to train models increased massively, and the technological advances of the time increased the computing power of computers, making this discipline see the light again. In 1997 one of the most famous news stories marked the rebirth of machine learning: IBM's Deep Blue system, trained in part on thousands of successful chess games, managed to defeat the reigning world champion of the game, the Russian Garry Kasparov. The achievement was made possible by "deep learning", a sub-discipline of machine learning of the "reinforcement" type, first described in the 1960s, which allows systems not only to learn from experience but also to train themselves to do better and better using the data.
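The "do better and better using the data" idea can be shown with a single artificial neuron, a far smaller relative of the deep networks mentioned here (the data is made up): repeated passes over the examples adjust the weights until the error disappears.

```python
# A single neuron trained by the classic perceptron rule on toy data.
data = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
w, b, rate = [0.0, 0.0], 0.0, 0.5

def predict(features):
    return 1 if sum(wi * x for wi, x in zip(w, features)) + b > 0 else 0

for _ in range(20):  # each pass over the data improves the fit a little
    for features, target in data:
        error = target - predict(features)  # perceptron update rule
        w = [wi + rate * error * x for wi, x in zip(w, features)]
        b += rate * error

print([predict(f) for f, _ in data])  # -> [0, 1, 1, 0], matching every label
```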

Today technology has advanced exponentially and this discipline has advanced with it. Many fields benefit from machine learning, such as online shopping and online advertising (tailoring which advertisement gets more visibility depending on the user who visits the page), and anti-spam filters have been taking advantage of these technologies for a long time. In short, the practical field of application depends on the imagination, on the data available in a company (as long as it is large enough), and on a system powerful enough to process it. These are some frequent examples:

• Economic predictions and stock market fluctuations.
• Detecting fraud in transactions.
• Predicting which employees will be more profitable next year (the Human Resources sector is betting seriously on machine learning).
• Selecting potential customers based on their behavior on social networks.
• Predicting failures in technological equipment.
• Predicting urban traffic.
• 3D mapping and modeling.
• Making medical pre-diagnoses based on patient symptoms.
• Classifying DNA sequences.
• Interactions on the web.
• Changing the behavior of a mobile app to adapt to the habits and needs of each user.
• Detecting intrusions in a data communications network.
• Knowing the best time to post tweets, Facebook updates, or newsletters.
• Deciding the best time to call a customer.
• Optimizing and implementing digital advertising campaigns.

Certainly, the more data these systems are supplied with, the more efficient they become, but human talent is still needed to perfect them, since computers ultimately do not have a strong command of language as applied to reasoning; in other words, they are not exactly adept at determining context. This suggests that for machine learning to be developed in other areas, or improved in existing ones, people will have to train these machines and gradually incorporate them into each of the processes they want to fine-tune.

The present state of these technologies is considered part of the fourth industrial revolution, and not for nothing: as we have seen, they are capable of doing things that were not previously believed possible, and many of the leading companies in the market have chosen to create their own versions of this technology, since it helps them lower financial costs (by reducing the human cost of some tasks) and increase efficiency.

An example of this is the application they currently have in sales and personalized customer service: because these systems can monitor tastes and the use of applications and products more efficiently, they can offer better help and recommendations. They are also used in the stock market to predict better investments.

Currently, the biggest recent achievement that can be credited to machine learning came last year with the program AlphaStar, built to compete in the RTS video game StarCraft II. Unlike the games already mentioned, such as Go or chess, which are also strategy games, a video game of this style is much less limited (chess even more so), since play consists not only of moving units but also of obtaining and maintaining economic and material resources to keep advancing. This software was able to beat one of the best players, known by the nickname "MaNa", in repeated matches. Starting from in-game training in which it knew only the basic rules and a couple of strategies, it came to understand complicated strategies and styles of play in only a few months, and the actions it could carry out were limited to those of a human being: the player MaNa calculated that he made more than three hundred actions per minute, while the program made little more than two hundred. It should be noted that actions per minute matter greatly in this game, because the more actions per minute a player makes, the better the chance of winning.

But all progress brings problems, and in the case of these branches there are two positions, or two problems. The first is the use that can be made of these technologies, since unlike a person they have no values and no ethical or moral principles.

One of the most recent and famous cases is the deepfake, a technology that relies on an AI to falsify videos, swapping some faces for others or manipulating lips to make people appear to say false things, in ways almost impossible to distinguish with the naked eye. A related example was an AI text generator called GPT-2, developed by OpenAI, which was initially withheld from publication because it was considered "dangerous": its creators found, to their amazement, that the AI had learned on its own to create fake news. One example of the texts it created is the following:

" Recycling is NOT good for the world. It is bad for the environment, it is bad
for our health, and it is bad for our economy. I am not kidding. Recycling is not good
for the environment. It is destructive to Earth and it is a of the main causes of global
warming. Recycling is not good for health. It contributes to obesity and diseases
such as heart problems and cancer. Recycling is bad for the economy. It increases
production costs and, therefore, the price of everything that is made. Recycling is not
good for our nation. We pay a tremendous price for the privilege of having the most
advanced and efficient recycling system. Recycling is a huge, colossal waste of
time, energy, money, and resources . "

And although what this AI wrote is false, people are manipulable: a computer does not need to rest, and it can keep producing and publishing such texts all day, in many places, for a long time; after a few months there may be people who genuinely believe these statements and begin to turn against this issue or others.

Another example that generated controversy was a Microsoft AI called Tay, a chatbot that used machine learning technology and learned as it interacted with Twitter users. This AI had to be taken offline after just one day in operation, since users had managed to make the software become racist, xenophobic, and homophobic.

But there is another position on artificial intelligence that at first sounds like the plot of a film, yet it is held by some of the great minds of our times, such as the scientist Stephen Hawking, Bill Gates, founder of Microsoft, and Elon Musk, one of the founders of major technology companies and organizations such as OpenAI and Tesla. They believe that the misuse of this technology is the least of the problems, and that the real problem is an AI becoming too smart and deciding to do without us, something the aforementioned Marvin Minsky already predicted in 1970: "it is possible that, with luck, computers will decide to keep us as pets."

The question of whether artificial intelligences are ethical and safe is a very heated debate: people like Mark Zuckerberg say we should not be alarmist, while others like the aforementioned Elon Musk, or Jack Ma, founder of Alibaba, are convinced that they could trigger a third world war.

But not everything is bad: the benefits are countless, in fields as diverse and distant as disease detection, the discovery of cures, solutions to climate change, space exploration, and the solving of mathematical problems. What is beyond doubt is that the change they are generating is stronger than that brought by the internet or mobile telephony.
Conclusions
Throughout this investigation we were able to see how a subject believed to belong to this century, to our generation, is in fact a question man has been asking since philosophers have existed, but it is only now that it has a real chance of seeing the light. At times it can be disturbing, but we should not be afraid of it.
