1 Thinking may never submit itself, or may it?

What if I decided today to publish a tutorial on how to use artificial intelligence to create autonomous weapons? This knowledge would be captured on the internet, and its effects could never be reversed. A new door would be opened, giving everyone access to a powerful tool of terrorism, and once opened, it could not be closed again.
As technologies are developed, doors are opened to new use cases. Some of these are beneficial, while others prove to be very destructive. For example, dynamite was initially intended to make blasting in mining safer and quicker, but it also paved the way towards more destructive weapons. Fundamental research in nuclear energy gave rise to nuclear weapons. The responsibility for these negative side effects is rarely placed back on the academic field.
The academic field as we know it was created in the early 19th century and was based on Wilhelm von Humboldt's theory of academic freedom. This theory holds that higher education must be free from economic, cultural and governmental constraints, and must be free to share any knowledge. This was important in times when the Catholic Church blocked research that contradicted its beliefs.
The theory still holds today: universities may freely choose what they research and publish. This approach places great trust in universities to make ethical choices about what is published. But in many fields, publishing means that anyone has access to the results, even though such open access is not always necessary to achieve the positive outcomes. Often only the positive outcomes are considered; the negative outcomes are rarely seen as a reason not to publish. This is because the first priority of academics is to advance their field quickly, and there is little incentive to slow that progress in order to control the negative outcomes.
In the past, negative outcomes almost always had to occur before regulations were put in place to control them. But now that technology has become more powerful, we are reaching a point where a single occurrence of a negative outcome may make it impossible to ever control again.
It is widely recognised that AI models can pose great danger when put in the wrong hands.

Models trained on the vast amounts of privacy-sensitive data available have proven very effective at steering opinions. Deepfake technology uses AI to replicate someone's appearance and voice in a video almost perfectly. This technology is publicly available and poses a great threat to the trustworthiness of our communication channels.
The risk is recognised by most machine learning experts, and some incidents have already caused damage. However, regulations to control these negative outcomes have still not been made.
As the capabilities of a technology increase, so do the safety concerns of publicly releasing it. In the field of biotechnology, this has always been a prevalent topic. Recent developments have made it possible to modify the human genome for many health benefits. But even though it has the potential to cure many genetic diseases, editing the human germline has been made illegal in many countries because the negative outcomes would be uncontrollable. This is one of the few cases where regulations actually restricted what the academic field may research.
While such safety concerns have always been prevalent in biotechnology, computer science research gives little consideration to negative effects. Computer scientists may not disrupt something as crucial as the human genome, but they can cause irreversible effects on society.
The recent rise of machine learning allows them to create far more powerful models than ever before: models that can be used for fraud, terrorism and oppression. The well-known machine learning researcher Stuart J. Russell has also written in his book about the dangers that more powerful AI models may cause. Still, the field of computer science has always had a more open-source way of working, believing that all research should be publicly available to enable quicker innovation. This innovation-driven view is not only seen in computer science, but in most scientific fields. How long will it take until this reckless chase for innovation causes a negative effect that cannot be controlled?
The responsibility to deal with the negative effects is passed to lawmakers, but laws are only written after an incident has occurred and public outcry demands them. This also requires the public to be informed enough to request the needed restrictions, an approach that often fails to stop the negative use of technology. To better prevent such misuse, we need to search for other solutions.
We cannot rely on the public alone. Active involvement from different scientific disciplines is needed to consider the dangers of new technologies and to suggest appropriate measures to control them.
Multiple approaches can be imagined to steer the usage of technology. One possibility is to change the way research is published: research that can potentially be used in a negative way would be shared only with those who can prove they will use it ethically. For this to work, scientists need to consider ethically with whom they share their research. Along with this, there should be far more public awareness of new technologies, in order to promote discussion of what is ethical and what is not. Finally, lawmakers have the responsibility to create laws in cooperation with the academic field, rather than waiting for public outcry to demand them.
