
AI-Based Deterrence in the Cyber Domain

Jim Chen
U.S. Department of Defense National Defense University, Fort McNair, Washington, USA
jim.chen@ndu.edu

Abstract: The cyber domain lacks deterrence, especially deterrence initiated from within the domain itself. To bridge this gap, deterrence by engagement and surprise was proposed as a cohesive addition to existing deterrence theories such as deterrence by denial and deterrence by punishment. Artificial intelligence (AI) can play a significant role in fully supporting this new type of deterrence initiated from within the cyber domain. This paper takes a close look at the potential and the benefits that AI may bring to the deterrence space. It examines the use of these powerful capabilities in achieving strategic advantages via unique operations, and it reveals that AI-based deterrence is a real force multiplier. In addition, the paper points out the limitations of this approach and recommends ways of truly utilizing AI in support of strategic goals.

Keywords: deterrence, artificial intelligence, capabilities, strategic advantages, force multiplier

1. Introduction
Deterrence is a significant strategy in warfare. Should it be successfully crafted and implemented, war can be avoided or its escalation prevented. It has worked well in the physical world, as witnessed in the Cold War. Deterrence relies entirely upon unique capabilities; without them, it is hard to imagine how deterrence could work. In the physical world, possession of nuclear weapons is one such unique capability, as the disastrous consequences of launching a nuclear weapon are too huge even to imagine. Deterrence theories are built upon three pillars: capabilities, credibility (or willingness to use capabilities), and communication (of this willingness). Among these three, capabilities is the key pillar; the other two rely upon it.

The possession of unique capabilities can create stable deterrence, which, in Payne's (2008) terms, "could be orchestrated to proceed from mutual prudence born of mutual vulnerability"; unique capabilities ensure that both sides have to "exercise self-control" because of the irreversible and disastrous consequences. Once a balance between both sides is achieved, negotiation becomes possible. This point is well articulated by Schelling (1960), who holds that "unless we are ready for some kind of decisive showdown in which we either win all or lose all, we must be as willing to 'negotiate' (by our actions) for limited objectives in terms of nuclear dominance, traditions and precedents of nuclear use, and the 'rules' we jointly create for future wars, as for any other types of objectives in limited war". Specifically, the strategy of deterrence by punishment uses unique capabilities such as nuclear weapons as a bargaining chip to deter adversaries. It takes advantage of the deterring effects of uncertainty in creating a stable balance of terror. Likewise, the strategy of deterrence by denial uses unique capabilities such as nuclear weapons as strong backup support: a retaliatory second-strike offensive capability, plus strong defense and resilience capabilities, can scare adversaries away, thus achieving deterrence effects.

Unlike the physical world, the virtual world lacks virtual "nuclear weapons". This has a negative impact upon the effectiveness of in-domain deterrence. As a result, implementing the strategy of deterrence by punishment in the cyber domain relies heavily upon cross-domain instruments, such as diplomatic, economic, legal, military, and other non-cyber instruments. Using cross-domain instruments is usually time-consuming, as none of them can be used at cyber speed. What is more, extra time and effort are needed for collaboration in implementing cross-domain solutions. The question, then, is whether the cyber domain possesses some unique capabilities that can be used to support deterrence. A follow-on question is how effective deterrence utilizing these unique cyber capabilities can be. These are the research questions that this paper intends to address.

In order to operate at cyber speed, AI is a natural choice; it can be employed to make a difference. In Allen and Chan (2017), the transformative potential of AI is discussed in detail. They look in turn at the implications of AI for military superiority, information superiority, and economic superiority. They observe that researchers in the AI field have demonstrated significant technical progress over the past few years in the sub-field of machine learning, and mention that "most experts believe this rapid progress will continue and even accelerate". They hold that existing capabilities in AI have enabled "high degrees of automation in labor-intensive activities such as satellite imagery analysis and cyber defense", and that "future progress in AI has the


potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech". Following this argument, it can be claimed that, just as nuclear weapons serve as a foundation for deterrence in the physical world, AI can serve as a foundation for deterrence in the virtual world. However, AI has to be closely tied to the unique capabilities of cyberspace to be effective. Hence, this correlation needs to be explored in research.

This paper is organized as follows: Here in Section 1, the theme of the research and its significance are
introduced. In Section 2, the challenges in cyber deterrence are discussed. In Section 3, the unique
characteristics of the cyber domain are discussed. A model that ties these unique characteristics to AI is then
proposed to create a unique capability in the cyber domain. In Section 4, further discussion is provided and the
benefits and limitations of this model are examined. In Section 5, a conclusion is drawn.

2. Challenges in cyber deterrence


Unlike in the physical world, deterrence does not work well in the cyber world. As well put by Shea (2017),
“Whereas we have a good idea how to deter a nuclear or conventional attack, to deal with crises in the
traditional domains, to employ arms control or confidence-building arrangements, we still do not have a good
idea of how to deter or respond to major cyberattacks, even when they are clearly designed to undermine our
governments or our political processes.”

Chen (2018) provides two reasons for this dilemma. One is the lack of in-domain deterrence in cyberspace; the other is the lack of guidance for implementing cyber deterrence strategies. To deal with the first challenge, Chen (2017) proposes a new deterrence strategy, i.e. deterrence by engagement and surprise. This in-domain deterrence strategy helps to build a cyber deterrence continuum that integrates cyber defense and resilience, cyber deterrence, and cyber retaliatory offense. With such a continuum, strategic depth and operational flexibility can be created for cyber deterrence. To deal with the second challenge, Chen (2018) proposes a new model of cyber deterrence implementation, which consists of different levels and categories of operations as well as their corresponding levels and categories of retaliation. Different cyber deterrence strategies are thus associated with different levels and categories. This approach brings context into the decision-making process of selecting levels and categories of deterrence measures. It also intertwines an in-domain deterrence strategy with out-of-domain strategies.

There are other reasons in addition to the ones mentioned above. The third reason is the lack of credibility due to the difficulty of displaying in-domain deterrence capabilities, especially the effects of these capabilities in cyberspace for retaliatory purposes. In Libicki's (2018) terms, "The policy question is whether the United States should emphasize the possibility: by impressing adversaries with what cyberattacks can do, reminding adversaries that the United States would be willing to do it, and investing in making cyberattacks more reliable and even more forceful. Central to any answer is an assessment of how cyberwar might fit into an overall US conventional deterrence posture." After conducting an analysis, he draws the conclusion that "cyber capabilities can make a contribution to the broader deterrence framework"; however, their contribution to deterrence "is likely to be modest compared to the level of punishment cyberattacks promise or compared to similar kinetic threats". This assessment is based on two factors: (1) a deterrence scale, and (2) the highly uncertain effects of cyberwar. The first factor indicates that it is difficult to make a retaliatory cyberattack "a just-right response, neither too weak nor too harsh". The second factor means that uncertainty may keep adversaries from feeling real pain, thus weakening the deterrence strategy implemented. Should these two factors be addressed, the level of credibility can be increased, and this particular challenge correspondingly dealt with.

It is clear that generating a just-right response is hard, especially when it is done manually. Moreover, cyberspace is highly dynamic: cyber operations may be either below or above the threshold of armed conflict, with most below it, and a situation may change within a very short period of time as tension escalates or de-escalates. This adds another level of complexity to generating a just-right response. Ideally, these dynamics should be managed in cyber deterrence. If a cyber attack is below the threshold of armed conflict, the prompt and just-right response should also be below the threshold; if the situation escalates, the response should change accordingly. In other words, an adaptive and prompt response should be provided. To ensure that such a response is not a randomly selected one, a process has to be established and followed. Within this process, all the possible alternatives are identified, analyzed, compared, and selected, or new ones created, to address a unique situation. Note that all these activities should


be conducted within a very short period of time. Promptness, adaptability, and accuracy are the keys to success in the cyber domain. Obviously, to satisfy these requirements, a machine-learning process has an advantage over a process manually conducted by humans.
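The selection process described above can be sketched in code. The following is a minimal illustration, not part of any cited model: the 0-10 severity scale, the threshold value, the candidate actions, and the `just_right_response` function are all hypothetical assumptions introduced here for demonstration.

```python
# Hypothetical sketch: selecting a "just-right" response whose severity is
# proportionate to the attack and stays on the same side of the armed-conflict
# threshold. Scale, threshold, and candidates are illustrative assumptions.

ARMED_CONFLICT_THRESHOLD = 5.0  # assumed severity scale from 0 to 10

def just_right_response(attack_severity, candidates, tolerance=1.0):
    """Pick the candidate whose severity is closest to the attack's,
    rejecting options that cross the armed-conflict threshold or fall
    outside the proportionality tolerance."""
    below = attack_severity < ARMED_CONFLICT_THRESHOLD
    viable = [
        (name, sev) for name, sev in candidates
        if (sev < ARMED_CONFLICT_THRESHOLD) == below       # same side of threshold
        and abs(sev - attack_severity) <= tolerance        # neither too weak nor too harsh
    ]
    if not viable:
        return None  # no proportionate option: escalate to human decision-makers
    return min(viable, key=lambda c: abs(c[1] - attack_severity))[0]

candidates = [("warning message", 1.0), ("traffic throttling", 3.0),
              ("network disruption", 4.5), ("kinetic strike", 8.0)]
print(just_right_response(3.2, candidates))  # traffic throttling
```

Returning `None` when no proportionate option exists mirrors the point made above: an adaptive response must come from an established process, not a random pick, and cases the process cannot resolve belong with humans.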

Uncertainty prevails in the cyber domain. As mentioned by Libicki (2018), "cyberattacks have effects that are particularly unpredictable". It is very difficult for humans to solve this puzzle; however, with an AI-based system empowered by a machine-learning process, the challenge of uncertainty may be addressed.

As shown above, there is a strong demand for a model of AI-based deterrence in the cyber domain. This model
should be able to manage promptness, dynamics, and accuracy in calculating, executing, and measuring impacts
of cyber operations, including cyber strikes. Meanwhile, it should be able to address the issue of uncertainty.

Designing a machine-learning environment for the cyber domain requires a good understanding of this particular domain, which in turn requires a good understanding of its unique characteristics. In the next section, the unique characteristics of the cyber domain are discussed. After that, a model that ties these unique characteristics to AI is proposed.

3. A model of AI-Based deterrence in the cyber domain

3.1 Unique characteristics of cyberspace


Cyberspace is a virtual space built on information and communications technology (ICT) systems, which are used to support computation, human communication, and other services. The ICT systems offer fast and automated services based on machine computation and transmission. Undoubtedly, speed is the first unique characteristic of cyberspace; it is sometimes referred to as cyber speed. Within cyberspace, different ICT systems in different parts of the world are connected. Nowadays, with more and more physical systems, including critical infrastructure systems and Internet of Things (IoT) systems, connected to the Internet, cyber operations conducted within the virtual domain may cause serious effects in the physical domain, including physical damage and destruction. Connectivity, both within the virtual world and between the virtual world and the physical world, is the second unique characteristic of cyberspace. Besides, human authentication is an add-on component in the cyber domain; by default, anonymity initially prevails. Unlike in the physical world, there is no direct human identification capability in ICT systems. To make up for this, authentication systems utilizing passwords, smart cards, fingerprints, and voice recognition are implemented. These systems are usually described as depending on something you know, something you have, something you are, and something you do. As these are indirect mechanisms for human identification, adversaries or criminals may steal the identities of other people and log into an ICT system using stolen identities. Anonymity is the third unique characteristic of cyberspace. In addition, a user who runs an operating system and applications over a network or internet connection has no idea how the networks are connected, what code is actually running inside the systems, or what code is executed for an application. It is common that the owner of a system is good at utilizing its functionalities but does not know exactly what is running inside it, as code runs and is processed at low levels while the human-machine interface sits at a high level. All systems seem like a black box to end users. Opaqueness is the fourth unique characteristic of cyberspace. In sum, speed, connectivity, anonymity, and opaqueness are the key unique characteristics of cyberspace.

These unique characteristics have built unique environments in the cyber domain. When anonymity and opaqueness are employed in deterrence or in offense, stealth operations come into being. Phishing attacks and malware attacks are good examples in offense; cyber attackers actually take advantage of these unique characteristics. When speed, connectivity, anonymity, and opaqueness are all integrated, surprise effects can be generated in the form of unexpected and painful responses coming from anywhere, at any time, within the shortest amount of time, with unknown means, from unknown actors. For example, a perpetrator has just launched an attack using a few computing devices; when he takes a break and checks his personal smartphone, a warning message pops up on the device telling him that what he has done is already known and that if he does not stop his attack right away he will be charged. In this example, speed, connectivity, anonymity, and opaqueness are all amalgamated into a retaliatory response, and a deterrence effect can undoubtedly be achieved. Likewise, the fusion of these unique characteristics can support intelligence collection, as this can be done remotely via connectivity within a short amount of time without revealing the identity of the collector. These unique capabilities


are well discussed by Chen and Dinerman (2018). They include the capability of intelligence collection, the
capability of stealth maneuvers, and the capabilities of generating surprise effects.

There are two types of major actors in the cyber domain: humans and machines. Humans are good at designing new systems and making decisions in complex and uncertain environments. Machines are good at providing fast responses and performing repetitive activities. The fusion of these two types of actors in operations is the key to success in the cyber domain. Assigning to machines what machines are good at increases efficiency, and assigning to humans what humans are good at increases effectiveness. Putting all these together, effective and efficient solutions can be generated.
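This division of labour can be illustrated with a minimal sketch. The task categories and the `route` function below are hypothetical assumptions introduced only to make the principle concrete.

```python
# Illustrative sketch of the human-machine division of labour described above.
# The task sets and routing rule are assumptions, not part of the cited model.

MACHINE_TASKS = {"repetitive scan", "fast response", "log triage"}
HUMAN_TASKS = {"system design", "uncertain decision", "escalation approval"}

def route(task):
    """Send machine-suited work to automation and judgement calls to humans."""
    if task in MACHINE_TASKS:
        return "machine"   # efficiency: speed and repetition
    if task in HUMAN_TASKS:
        return "human"     # effectiveness: design and uncertainty
    return "human"         # default unknown tasks to human oversight

print(route("log triage"))
print(route("uncertain decision"))
```

Defaulting unknown tasks to humans reflects the human-in-the-loop point made later in Section 4.2.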

3.2 Artificial intelligence (AI) and machine learning (ML)


AI aims to apply human intelligence to machines so that machines become smart enough to perform tasks that require human intelligence. Humans possess different types of intelligence. Gardner (1983) discusses nine types of human intelligence: naturalist intelligence (nature smart), musical intelligence (sound smart), logical-mathematical intelligence (number/reasoning smart), existential intelligence (life smart), interpersonal intelligence (people smart), bodily-kinesthetic intelligence (body smart), linguistic intelligence (word smart), intra-personal intelligence (self smart), and spatial intelligence (picture smart). Currently, AI is only good at executing logical-mathematical intelligence, spatial intelligence, and some linguistic intelligence. Specifically, AI is good at performing the following tasks: general object recognition, image processing, decision-making support, natural language processing (including speech recognition and language translation), and robotics. Other types of human intelligence are not yet well explored in AI.

Machine learning is a sub-field of AI. It makes machines smart enough to learn and improve without human help. It comprises supervised learning, unsupervised learning, and reinforcement learning. Deep learning is a subset of machine learning. As explained in Arel, Rose, and Karnowski (2010), the subfield of deep machine learning is motivated by recent findings about the neocortex. It "focuses on computational models for information representation that exhibit similar characteristics to that of the neocortex". It intends to reduce dimensionality without using pre-processing.

Artificial neural networks are key mechanisms in machine learning, and they enable deep learning. Schmidhuber (2014) points out that deep learning artificial neural networks are good at pattern recognition and machine learning: "Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects".

3.3 A model of AI-Based deterrence in the cyber domain


Following the basic principles, concepts, and designs in these approaches, a model of AI-based deterrence in the cyber domain can be proposed. The model has two major components. One is the situation assessment component, which identifies one or a few major issues. The other is the deterrence response component, which generates specific deterrence actions.

Like any deep learning artificial neural network, the situation assessment component consists of an input layer, several hidden layers, and an output layer. A high-level diagram of this component is displayed in Figure 1.

This diagram shows how an analysis is conducted within a deep learning artificial neural network. Multiple inputs are received in the input layer. These inputs are mapped into different nodes in the first hidden layer based on their distinctive features. Inside the hidden layers, with the help of supervised, unsupervised, and reinforcement learning mechanisms, a compression or normalization of the features can be achieved. After a fine-tuning process that addresses bias and assigns weights, one feature or one set of features stands out among all the features. The inputs associated with this feature or set of features are selected and propagated to the output layer as the key issue or key challenge to be dealt with.

Once the key issue or the key challenge is identified, the deterrence response component is activated. The high-
level diagram of this component is displayed in Figure 2 below:


Figure 1: The situation assessment component

Figure 2: The deterrence response component


This component takes in the key issue or key challenge provided by the situation assessment component. It goes through reasoning analysis, contextual analysis, and risk management analysis. All these analyses are supported by examination of past experience, scrutiny of the current situation, and analysis of future impacts. The reasoning analysis enhances the understanding of the goal, the purpose, and the reasons. The contextual analysis examines background, relationships, and other elements. The risk management analysis looks at advantages and disadvantages with the help of cost-benefit analysis. These analyses serve as the foundation for the selection of available actions in the supervised learning mechanism, the creation of new actions in the unsupervised learning mechanism, and the identification of similar patterns in the reinforcement learning mechanism. With available options pre-loaded and ready to be chosen, the supervised learning mechanism makes prompt responses possible. With the capability of quickly creating new options, the unsupervised learning mechanism makes customization and dynamics possible. With rewards for selecting the most suitable options, the reinforcement learning mechanism speeds up the whole selection process, thus making speedy decision-making possible. Based on these analyses, relevant weights are assigned to the selected actions, and the actions are prioritized. Finally, a response action or set of response actions is quickly selected for the specific environment. This makes it possible for a deterrence response to be prompt, relevant,


forceful, and dynamic. It makes an adversary feel that the deterrence was custom-designed for him or her, and thus overwhelmingly intimidating. This can generate a lasting impact on the adversary's psychology, making deterrence effective.
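The weighting and prioritization step described above can be sketched as follows. The candidate actions, the per-analysis scores, and the weights are all illustrative assumptions, not values from the model.

```python
# Hedged sketch of the deterrence response component: each candidate action
# receives scores from the reasoning, contextual, and risk management
# analyses; weights are assigned and the actions are prioritized.

WEIGHTS = {"reasoning": 0.4, "context": 0.35, "risk": 0.25}  # assumed weights

def prioritize(candidates):
    """Rank candidate actions by their weighted analysis scores, best first."""
    def total(scores):
        return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return sorted(candidates, key=lambda c: total(c[1]), reverse=True)

candidates = [
    ("targeted warning",   {"reasoning": 0.9, "context": 0.8, "risk": 0.9}),
    ("service disruption", {"reasoning": 0.7, "context": 0.6, "risk": 0.4}),
    ("public attribution", {"reasoning": 0.5, "context": 0.9, "risk": 0.7}),
]
ranked = prioritize(candidates)
print(ranked[0][0])  # targeted warning
```

Because the whole computation is a weighted sum followed by a sort, a response can be selected promptly once the three analyses have produced their scores, which is the property the model relies on.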

It is evident that this model is capable of generating prompt, customized, forceful, and dynamic deterrence
effects. The application of this model, the benefits and limitations of this model are discussed in the next section.

4. Discussion

4.1 Capability enabler


The proposed model of AI-based deterrence in the cyber domain can enable the capability of intelligence collection, the capability of stealth maneuvers, and the capability of generating surprise effects.

In Chen (2018), a model of cyber deterrence implementation is proposed. In this model, there are levels and categories for both operations and retaliation. The eight levels consist of strong defense and resilience (Level 0), intelligence collection (Level 1), surprise operations or information operations (Level 2), diplomatic responses (Level 3), economic responses (Level 4), offensive cyber-physical responses (Level 5), physical force responses (Level 6), and nuclear force responses (Level 7). In this model, cyber operations confined to cyberspace are conducted at Levels 0-2; cyber operations that affect the physical world are conducted at Levels 3-7.

According to this model, intelligence collection occurs at Level 1. Here, open source intelligence can be conducted. Specifically, the situation assessment component can help to extract data of intelligence value from all the input data via feature mapping and other methods, with the help of supervised learning, unsupervised learning, and reinforcement learning. After the inputs are received, the features of each input are extracted, and each input is mapped to a node in the first hidden layer based on feature similarity. Similar nodes are then put into a pre-defined category if they possess the same features that this pre-defined category is classified with; this is handled by the supervised learning mechanism. They are put into a newly created category if their features do not match the features of any available pre-defined category; this is handled by the unsupervised learning mechanism. Rewards are provided for suitable feature mapping so that the best possible mapping is encouraged; this is handled by the reinforcement learning mechanism. Using all these mechanisms, the categories of intelligence value can be separated from other categories. The data inside these categories can thus be labelled differently and then sent to a storage location for intelligence collection purposes.
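The categorization step just described can be sketched as a nearest-centroid match with a fallback for novel inputs. The category names, feature vectors, and distance threshold below are illustrative assumptions.

```python
import numpy as np

# Sketch of the categorization described above: an input's feature vector is
# matched against pre-defined category centroids (the supervised part); if no
# centroid is close enough, a new category is created for it (the
# unsupervised part). All categories and vectors are illustrative.

categories = {"network scan": np.array([1.0, 0.0]),
              "phishing":     np.array([0.0, 1.0])}
THRESHOLD = 0.5  # assumed maximum distance for a feature match

def categorize(features):
    """Return the best-matching category, creating a new one if none fits."""
    name, centroid = min(categories.items(),
                         key=lambda kv: np.linalg.norm(features - kv[1]))
    if np.linalg.norm(features - centroid) <= THRESHOLD:
        return name                              # supervised: known category
    new_name = f"category-{len(categories)}"     # unsupervised: new category
    categories[new_name] = features.copy()
    return new_name

print(categorize(np.array([0.9, 0.1])))  # network scan
print(categorize(np.array([3.0, 3.0])))  # category-2
```

A reinforcement signal would additionally reward mappings that later prove useful; that feedback loop is omitted here to keep the sketch minimal.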

Ideally, stealth maneuvers occur within the same category at the same level so that they remain relevant in context and can stay undetected. The deterrence response component can support these maneuvers. After going through the reasoning analysis, the contextual analysis, and the risk management analysis, the goals and the environment are well understood. An optimal path from the inputs to the output is then identified. Here, the approach of feedback artificial neural networks is employed. Unlike the approach of feedforward artificial neural networks, in which "the signals move in only one direction from input to output", this approach allows the signals to move in both directions, as described in Alrajeh and Lloret (2013). Moving backward from the output to the inputs, all the alternative options for a specific context become available again. Selecting an option from this list, or generating a new alternative, can help to produce a relevant action/event that is hard to detect. Eventually, the results of information campaigns or influence campaigns can thus be generated.

To generate surprise effects, the actions/events selected or generated do not need to be within the same category or at the same level. Based on the results of the reasoning analysis, the contextual analysis, and the risk management analysis, the deterrence response component can select or generate alternative actions/events from any category at any level, especially the ones that may catch an adversary by surprise and make him/her feel severe pain. There are various automated ways of creating unexpected consequences. Continually changing the weights assigned to actions/events may generate different outcomes, thus leading to different retaliatory responses. This is just one way of achieving dynamics and unexpectedness; it takes advantage of uncertainty in this space to achieve strategic advantage. Additionally, with the availability of various levels and categories, the proposed model provides a huge maneuver space. As a result, these cross-category and cross-level actions/events can generate deterrence effects.
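Weight perturbation, the mechanism named above, can be sketched as follows. The actions, base weights, and noise range are illustrative assumptions; the point is only that repeated runs over the same situation can yield different retaliatory responses.

```python
import random

# One way of automating unexpectedness: perturb the weights assigned to
# candidate actions so that the same situation can produce different
# retaliatory responses on different runs. Actions and weights are assumed.

ACTIONS = {"warning banner": 0.8, "account lockout": 0.7, "honeypot redirect": 0.75}

def surprise_response(noise=0.3, rng=random):
    """Re-weight the actions with random noise and pick the current maximum."""
    perturbed = {a: w + rng.uniform(-noise, noise) for a, w in ACTIONS.items()}
    return max(perturbed, key=perturbed.get)

rng = random.Random(42)
print(surprise_response(rng=rng))  # one of the three actions, varying run to run
```

Because the base weights are deliberately close together, even modest noise can flip the ranking, which is the uncertainty the strategy exploits; in practice the noise level would itself be tuned against the risk management analysis.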


It is evident that the model of AI-based deterrence in the cyber domain can successfully support the unique capabilities in cyber operations in an effective and efficient way. Taking advantage of AI, this model can provide prompt, customized, forceful, adaptive, dynamic, and unexpected responses.

4.2 Significance and future study


The capabilities enabled by the proposed model can support both speedy and autonomous operations. These capabilities are essential to modern warfare, as they provide a competitive military advantage. As noted by Price, Walker, and Wiley (2018), the pace of warfare is accelerating; hence, "the speed of future military action will incentivize increased reliance on complex autonomous analysis, recommendation, and decisionmaking" in order to gain "informational and temporal advantage". In one of the United Nations Institute for Disarmament Research documents (2017), it is mentioned that "with increasing autonomy distributed in a battlespace, there will be incentives to shorten time cycles between decision and action. This potential for a 'flash war' may be highly destabilizing". Obviously, under the pressure of speed, a response generated by a machine-learning process has an advantage over a response manually generated by humans, as the former can keep pace with the challenge of speed. The strength of this model is self-evident in this respect.

Besides, this model enables accelerated learning. As shown in Figure 2, a continuous feedback loop allows the system to quickly make adjustments in selecting the most suitable response whenever a change in context is detected. It also makes it possible for the system to identify the most frequently chosen response in a given context. The manipulation of weights can also generate varied outcomes. These mechanisms make dynamics, flexibility, and unexpectedness possible.
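The feedback loop just described can be sketched as a simple per-context value update. The contexts, responses, rewards, and learning rate below are illustrative assumptions; the sketch shows only how repeated outcome feedback shifts the preferred response.

```python
# Sketch of the accelerated-learning feedback loop: the system keeps a
# per-context value estimate for each response, and each observed outcome
# (reward) nudges the estimate, so the preferred response adapts over time.

values = {}          # (context, response) -> running value estimate
ALPHA = 0.5          # assumed learning rate

def update(context, response, reward):
    """Move the value estimate toward the observed reward."""
    key = (context, response)
    values[key] = values.get(key, 0.0) + ALPHA * (reward - values.get(key, 0.0))

def best_response(context, responses):
    """Pick the highest-valued response for this context."""
    return max(responses, key=lambda r: values.get((context, r), 0.0))

for _ in range(5):
    update("low-level intrusion", "warning", 1.0)   # warnings observed to work
    update("low-level intrusion", "lockout", 0.2)   # lockouts observed to work poorly

print(best_response("low-level intrusion", ["warning", "lockout"]))  # warning
```

When the context changes, a new set of `(context, response)` keys accumulates its own estimates, so the adjustment is per-context, matching the behaviour attributed to the feedback loop above.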

However, it has to be pointed out that bias may still exist in the calculations. As a consequence, there is a chance of selecting or generating inappropriate actions/events in a specific context. In future studies, new ways of mitigating and reducing bias in calculation need to be found in order to further improve the effectiveness of this model. To deal with this issue at present, humans have to be kept in the loop for supervision, validation, and critical decision-making. The fusion of humans and machines in operations is the key to success in the cyber domain.

In this section, both the advantages and disadvantages of this proposed model are discussed. An area for future
study is suggested.

5. Conclusion
In this paper, the potential and the benefits that AI may bring to the deterrence space are explored. The paper reveals that AI can serve as a foundation for deterrence in the virtual world, generating prompt, customized, forceful, adaptive, dynamic, and unexpected effects. The proposed model of AI-based deterrence in the cyber domain is a real force multiplier, as it ties the unique characteristics of the cyber domain to AI. The model consists of two major components. The situation assessment component provides analysis of the input data with the help of deep learning artificial neural networks. The deterrence response component supports decision-making and recommends actions/events as deterrence responses. Together, these two components can successfully support the capability of intelligence collection, the capability of stealth maneuvers, and the capability of generating surprise effects for cyber operations. This model can successfully address the two issues that Libicki (2018) mentions: AI operating at cyber speed can produce well-calculated, just-right deterrence responses, and it can also create highly uncertain deterrence responses that make adversaries feel real pain, so that credibility can be increased. Future study on methods of handling bias in calculation needs to be conducted, as such bias is a limitation of this model. This approach truly utilizes AI in supporting deterrence, thus enriching deterrence theories as a whole.

References
Allen, G. and Chan, T. (2017) Artificial Intelligence and National Security, Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, Massachusetts.
Alrajeh, N. and Lloret, J. (2013) "Intrusion Detection Systems Based on Artificial Intelligence Techniques in Wireless Sensor Networks", International Journal of Distributed Sensor Networks, Vol.2013. http://dx.doi.org/10.1155/2013/351047.
Arel, I., Rose, D. and Karnowski, T. (2010) "Deep Machine Learning - A New Frontier in Artificial Intelligence Research", IEEE Computational Intelligence Magazine, November 2010, pp.13-18.
Chen, J. (2018) “On Levels of Deterrence in the Cyber Domain”, Journal of Information Warfare, Vol.17, No.2, pp.32-41.


Chen, J. and Dinerman, A. (2018) "Cyber Capabilities in Modern Warfare", in M. Lehto and P. Neittaanmäki (Eds.), Cyber Security: Power and Technology, pp.21-30, Springer.
Chen, J. (2017) “Cyber Deterrence by Engagement and Surprise”, PRISM, Vol.7, No.2, pp.101-107.
Gardner, H. (2011) Frames of Mind: The Theory of Multiple Intelligences. (3rd edition). Basic Books.
Libicki, M. (2018) “Expectations of Cyber Deterrence”, Strategic Studies Quarterly, Vol.12, No.4, pp.44-57.
Payne, K. (2008) The Great American Gamble: Deterrence Theory and Practice from the Cold War to the Twenty-First
Century, National Institute Press.
Price, M., Walker, S., and Wiley W. (2018) “The Machine Beneath: Implications of Artificial Intelligence in Strategic
Decisionmaking”, PRISM, Vol.7, No.4, pp.93-105.
Shea, J. (2017) “How Is NATO Meeting the Challenge of Cyberspace?”, PRISM, Vol.7, No.2, pp.19-29.
Schelling, T. (1960) The Strategy of Conflict, Oxford, UK: Oxford University Press.
Schmidhuber, J. (2014) Deep Learning in Neural Networks: An Overview, Technical Report IDSIA-03-14, the Swiss AI Lab
IDSIA, University of Lugano & SUPSI, Switzerland.
United Nations Institute for Disarmament Research. (2017) “No. 6 The Weaponization of Increasingly Autonomous
Technologies: Concerns, Characteristics, and Definitional Approaches—a primer”, Retrieved from
http://www.unidir.org/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-
concerns-characteristics-and-definitional-approaches-en-689.pdf

