
Chapter 7

Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency

Don Howard and Ioan Muntean

Abstract This paper proposes a model of the Artificial Autonomous Moral Agent
(AAMA), discusses a standard of moral cognition for AAMA, and compares it with
other models of artificial normative agency. It is argued here that artificial morality
is possible within the framework of a “moral dispositional functionalism.” This
AAMA is able to read the behavior of human actors, available as collected data, and
to categorize their moral behavior based on the moral patterns found therein. The present
model is grounded in several analogies among artificial cognition, human cognition,
and moral action. It is premised on the idea that moral agents should not be built
on rule-following procedures, but on learning patterns from data. This idea is rarely
implemented in AAMA models, although it has been suggested in the machine ethics
literature (W. Wallach, C. Allen, J. Gips and especially M. Guarini). As an agent-
based model, this AAMA constitutes an alternative to the mainstream action-centric
models proposed by K. Abney, M. Anderson and S. Anderson, R. Arkin, T. Powers,
W. Wallach, i.a. Moral learning and moral development of dispositional traits play
here a fundamental role in cognition. By using a combination of neural networks
and evolutionary computation, called “soft computing” (H. Adeli, N. Siddique,
S. Mitra, L. Zadeh), the present model reaches a certain level of autonomy and
complexity, which illustrates well “moral particularism” and a form of virtue ethics
for machines, grounded in active learning. An example derived from the “lifeboat
metaphor” (G. Hardin) and the extension of this model to the NEAT architecture (K.
Stanley, R. Miikkulainen, i.a.) are briefly assessed.

D. Howard
The Reilly Center for Science, Technology, and Values, University of Notre Dame, Notre Dame,
IN, USA
e-mail: dhoward1@nd.edu
I. Muntean (✉)
The Reilly Center for Science, Technology, and Values, University of Notre Dame, Notre Dame,
IN, USA
Master of Liberal Arts and Sciences Program, University of North Carolina, Asheville, NC, USA
e-mail: imuntean@unca.edu

© Springer International Publishing AG 2017


T.M. Powers (ed.), Philosophy and Computing, Philosophical Studies Series 128,
DOI 10.1007/978-3-319-61043-6_7

Keywords Artificial autonomous moral agent • Moral-behavioral patterns • Soft computing • Neural networks • Moral cognition • Moral functionalism • Lifeboat ethics • Moral particularism

7.1 The Ethics and the Epistemology of the Artificial and Autonomous Moral Agent (AAMA)

How much can we know about the world? How limited is our ability to represent it?
How special are we, human beings? How limited is our ability to make the world
a better place? One of the most important aims of philosophy is to inquire about
the limits of our “given” human condition. Philosophy scrutinizes the universality
and uniqueness of our knowledge, power of understanding, freedom, creativity and,
ultimately, morality.
Ethics and epistemology, it is assumed here, are among the most active areas
of philosophy. While the core tenets of ethics might change at a slower pace, the
dynamics of the social, religious, and cultural structures, the rapid transformation
of new technologies, and scientific discoveries, all create new perspectives on
applying ethics to everyday life. Some talk about a computational turn in traditional
epistemology, referring, at the very least, to the computational model of the mind. This
paper assumes that both ethics and epistemology will include new directions of
research on artificial agents (moral, cognitive, social, etc.), similar to philosophical
investigations in Virtual Reality, Artificial Intelligence, nanotechnologies, etc.
One way to extend our given morality is by “human enhancing”: improving our
given faculties beyond their natural limits. Săvulescu and Maslen recently argued
that an artificial system that can monitor, prompt, and advise our moral behavior,
“could help human agents overcome some of their inherent limitations” (Săvulescu
and Maslen 2015). This paper explores philosophically another path to extending
moral cognition and agency beyond ourselves, and addresses this question: can we
apply ethics, “as we know it,” to non-individual agents—human or non-human
alike?1 Can groups of people, companies, governments, nations, military units,
highly intelligent animals, be moral agents? What about artificially created agents:
computers, algorithms, robots, etc.? Can they be moral agents? Enter “machine
ethics” (aka “computational ethics”), a discipline that addresses this latter question.2

1. One can assume that moral agents can be of a human nature, without being individuals: groups, companies, or any decision-making structure (e.g. the military line of command); or possess individuality, without being humans: animals, artificial agents, supranatural beings, etc.
2. There are two areas of research germane to the present analysis: “ethics of emergent technologies” and “machine ethics.” The former has focused mainly on the moral implications, both intrinsic and extrinsic, of the dissemination of technologies in society. In most cases, the ethical responsibility belongs to humans (or to human groups or communities): they are the moral decision-makers and the responsibility bearers; e.g. the field of “robo-ethics” focuses upon the use of robots, the societal concerns raised by their deployment, and the impact on our system of values (Wallach 2014). The ethics of technology has to be complemented with another perspective, centered on a more balanced human-machine interaction, which has become relevant relatively recently: the ethics of the actions and decisions of machines when they interact autonomously, mainly with humans. This relatively new area of study is “machine ethics,” which raises interesting new ethical issues, beyond the problems of emergent technologies (Allen and Wallach 2009; Abney et al. 2011; Anderson and Anderson 2011; Gunkel 2012; Trappl 2013, 2015; Wallach 2014; Pereira and Saptawijaya 2016). It inquires into the ethics of our relation with technology when humans are not the sole moral agents.

The present proposal is foremost a philosophical effort: seeking some human faculties, moral cognition included, in “other” beings, which is not an empirical issue and cannot be solved only by anthropological, sociological, or historical
investigations. The prospect of artificial moral agents moves the discussion well
beyond actual moral agency, into the realm of possible moral agency, including
those agents that we can in principle construct. Designing artificial moral agents is
a highly interdisciplinary endeavor, as it involves artificial intelligence, psychology,
cognitive science, philosophy (with a swarm of issues from philosophy of mind,
ethics, and metaphysics), together with the ethics of emergent technologies. Moral
cognition is taken here as a central component of moral agency, such that the
argument for artificial agency becomes entwined with claims for the possibility, and
the plausibility, of artificial moral cognition. “Dispositional moral functionalism” is
discussed in some detail, together with a form of virtue ethics, to propose a concrete
artificial moral agent that meets some minimal desiderata from the perspective of
ethics, cognitive science, and moral psychology. The model uses an “agent-centric”
approach and a form of virtue ethics based on dispositional traits; unlike most of the
known models of artificial morality, this model is case-based, rather than rule-based.

7.1.1 A Two-Dimensional Analysis: Complexity and Autonomy

What makes machine ethics philosophically enticing? One reason is that artificial
agents need to be simultaneously autonomous and complex: hence the cluster
of issues in epistemology, metaphysics, and ethics. The field of “machine
ethics” is determined by the interaction machines have with humans, animals, the
environment, groups of humans, etc. On the one hand, there is an inner complexity
of the machine itself (with perhaps a simple interface with the humans), and on
the other hand the complexity of the machine-humans interaction (with perhaps
a simple inner structure). It is the latter, the complexity of the sociotechnical
system, that is more important than the complexity of the machine itself.3 Another
dimension of machine ethics is autonomy. Artificial agents are supposed to be both complex and autonomous: when autonomy and complexity are combined, the predictability, the deterministic behavior, and the mechanistic decomposability of the systems are different, and some traditional ideas about systems are challenged.

3. We prefer to talk here about complexity as a measure of the machine’s interaction with humans, the environment, etc., and not as a feature of the machine per se. The problems with technology, Wallach writes, “often arise out of the interaction of those components [specific technology, people, institutions, environment, etc.] with other elements of the sociotechnical system” (Wallach 2015, 34).

In the last two decades or so, we have had an increasingly complex relation with
emergent technologies: the Internet, personal computers, smartphones, new repro-
ductive technologies, stem cell research, synthetic biology, Virtual Reality, and,
more relevant here, advanced intelligent robots. As artificial agents presumably accomplish more and more complex tasks (driving cars, flying airplanes, fighting on the battlefield, recognizing not only voices and faces but also human emotions and expressions, caring for elderly or disabled people, trading high-frequency stocks, or, in a more or less distant future, performing operations on humans at the nano-scale, i.e. nano-medicine), these machines will interact ever more significantly with humans. In
science or technology, we will gradually delegate predictions of complex processes
to machines (e.g. weather models, the control of nuclear systems, unmanned
missions to distant planets, etc.), or even the decision-making procedure about
complex situations.
The complexity of all these systems is not the only factor that has philosophical
implications: it is their autonomy that bothers us the most. Autonomy is much
harder, if not impossible, to transfer from humans to machines, or to delegate to
machines. While the complexity of our interaction with artifacts can increase with
no expectable limit in sight, some of us would want to limit their autonomy. For
some specific systems, machines should be able to make some decisions, but an
ultimate control over what a machine can do by itself is needed. Many would keep
the degree of machine autonomy low, especially for lethal weapons, when human
lives are at stake. For example, one position, advocated by N. Sharkey i.a., is to ban,
legally, any autonomous machine with lethal power (Sharkey 2012).4 The opposite
view, defended by R. Arkin, is that lethal robots are much better than human soldiers
when it comes to avoiding unwanted collateral casualties in war (Arkin 2013). Some
recent work has recognized the need to push the discussion to a more analytical level
and to dig deeper into the permissibility of autonomous lethal weapons (Strawser
2013; Enemark 2014; Galliott 2015).

7.1.2 The Multi-faceted AAMA Ideal

The backbone of this paper is the analogy between general intelligence and moral
cognition. Complexity, learning, adaptation, and autonomy play central roles in most
recent definitions of “general intelligence” (Legg and Hutter 2007). This paper
places them at the core of machine ethics. It is sometimes assumed, especially in
the line of pragmatism about ethics, that intelligence and moral competence are
not independent: they are rather correlated, as they have coevolved in the history of humankind (Flanagan 2007; Kitcher 2011). As some have argued, there is no special moral faculty outside the general cognitive faculties (Johnson 2012).

4. See the proposal for banning fully autonomous weapons by Human Rights Watch (Human Rights Watch 2013).

There is arguably an analogy between intelligence and moral cognition: aspects of intelligence have counterparts in morality and the other way around; or, to put it more strongly, morality and cognition are two aspects of the same faculty of agents, natural and artificial alike.
Nevertheless, something more than computational capability is desirable
for artificial agents, namely, a minimal set of capacities to decide about ethical
matters (Allen and Wallach 2009). For Allen and collaborators, there is an urgent
requirement to extend moral reasoning to artificial agents, given the demand and
supply of such machines, and their political and legal consequences. Now we are
on the cusp of a new era of technology, when it is necessary to include ethics
modules in a wide range of autonomous systems. Now is the right time to address
the problem of artificial morality and machine ethics (Wallach 2015). A timely
philosophical discussion of these issues is preferable to the “wait-and-see” attitude,
or to a posteriori analysis, after some of these technologies have been adopted by
the society.
The AAMA is a complex artificial agent endowed with autonomy and the
ability to make ethical decisions.5 A working definition of an autonomous agent
would suffice here, as it is hard to define an artifact that does not exist yet.6 This
paper refers to an agent as generally as possible: it is an entity that is part of
the real world and/or part of a computationally constructed world. It can retain
a form of individuality, or, on the contrary, it can be a group or an aggregate of
individuals (artificial or not). Most agents are “complex adaptive systems” which
include reactive units, subsystems capable of reacting to changes in the environment.
One element useful in delineating the AAMA is adaptation; it makes decisions by
adapting to new data and to previous experience. It needs to sample an environment
and then select the best response to the changes in it. This is the “action-perception
cycle,” such that the life of the autonomous agent is a succession of sampling,
attending, and acting. Autonomy here is defined as a dynamical concept: what is
emphasized here is the independence from human decision-makers and from rules
pre-imposed by the programmer.
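To make the action-perception cycle concrete, here is a minimal sketch in Python (our own illustration, not an implementation discussed in this chapter); the environment, the feature names, and the candidate responses are hypothetical placeholders.

```python
# A minimal, hypothetical sketch of the "action-perception cycle": the agent
# samples its environment, attends to the most salient change, and selects a
# response based on its own accumulated experience rather than pre-imposed rules.
import random

class MinimalAutonomousAgent:
    def __init__(self, responses):
        self.responses = responses   # the possible behaviors of the agent
        self.memory = []             # previous experience (sampled states, choices)

    def sample(self, environment):
        return environment()         # read the current state of the environment

    def attend(self, state):
        # attend to the feature that changed the most since the last sample
        last = self.memory[-1]["state"] if self.memory else {}
        return max(state, key=lambda k: abs(state[k] - last.get(k, 0.0)))

    def act(self, salient):
        # reuse a response previously associated with this salient feature,
        # otherwise make an exploratory choice
        learned = {m["salient"]: m["response"] for m in self.memory}
        return learned.get(salient, random.choice(self.responses))

    def step(self, environment):
        state = self.sample(environment)
        salient = self.attend(state)
        response = self.act(salient)
        self.memory.append({"state": state, "salient": salient, "response": response})
        return response

# Toy environment with two numeric features; the names are purely illustrative.
env = lambda: {"distress_signal": random.random(), "request_for_help": random.random()}
agent = MinimalAutonomousAgent(responses=["assist", "defer", "ignore"])
print([agent.step(env) for _ in range(5)])
```

The point of the sketch is only that autonomy, in the dynamical sense used above, lives in the loop itself: no moral rule is pre-imposed by the programmer, and the agent’s choices depend on its accumulated memory of sampled states.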
The present model is distinguished from other AAMA models by its use of the concepts of learning and adaptation: autonomy is acquired here through a developmental process. A collection of data D represents an assessment of moral
interactions among humans, or their interaction with animals, the environment, parts
of human body, etc., if one concedes that such interactions involve moral aspects.
This is similar to the supervised learning used in machine learning: the AAMA learns the “mapping” from a set of inputs Xi to an output Yi, from an initial training set Tr belonging to a set of moral data D. The model is then validated against another subset of validation data V and tested against a test set Te, also a subset of D.7

5. The literature refers to the autonomous moral agent and the artificial moral agent under the acronym AMA. AAMA refers here to the “artificial autonomous moral agent.”
6. For a comprehensive formal definition, albeit somewhat dated, see (Franklin and Graesser 1997).

The definition of an autonomous moral agent may also include the concept of active
moral learning, as part of moral cognition. In this model, which is fundamentally
agent-based, as opposed to more action-based models, there is another component
of the AAMA that plays a central role: each AAMA has a set of dispositional traits,
or possible behaviors, which play a role similar to “dispositional moral virtues.”
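The supervised-learning schema just described (a mapping from inputs Xi to outputs Yi, learned on a training set Tr and checked against validation and test subsets V and Te of the moral data D) can be sketched as follows. This is only an illustration of the schema, not the authors’ implementation: the “moral data” is synthetic, and a small off-the-shelf neural network from scikit-learn stands in for the AAMA’s learning component.

```python
# A minimal sketch of the supervised-learning setup described above; the
# "moral data" D is synthetic and purely illustrative. Each row X[i] encodes the
# observable features of a situation and y[i] the morally qualified behavior
# recorded for it (e.g., 0 = refrain, 1 = act).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                      # D: features of observed situations
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # stand-in moral qualifications

# Split D into training (Tr), validation (V), and test (Te) subsets.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Learn the mapping X_i -> Y_i on Tr, monitor on V, report generalization on Te.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
model.fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_te, y_te))
```

The split into Tr, V, and Te mirrors standard machine-learning practice: the validation subset guides model selection, while the test subset estimates how well the learned moral patterns generalize to unseen cases.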

7.1.3 The Natural-Human-Artificial Feedback

Another aim of this project is to better understand our own practical reason and,
hopefully, to improve and overcome its limits. If non-human moral agents exist,
designing them is germane to addressing meta-ethical questions about the plurality,
the limits, and the possibilities of our ethical systems. Building an artificial moral
agent is such a challenge: similar to approaches in philosophy of mind, traditional
ethicists need to rethink some concepts, when facing the new framework of
artificial moral and autonomous agents. Let us call this the “natural-human-artificial
feedback.” The process of understanding and explaining human morality includes
the “natural” (our biology, including evolution, inheritance, both at the level of
species and the individual), the “human” (our psychology, sociology, the political
and the legal systems) and, according to the hypothesis conjectured here, the
“artificial”: the models and the concrete realization of AAMAs. The developments
in machine ethics, including different AAMA models, would transfer knowledge
into areas such as ethics (including meta-ethics) and epistemology and shed light on
our own moral ability and competency. Human science is reflexive, as we inquire into ourselves (Flanagan 2007, 124). Morality does not need to be exclusively reflexive, though: questioning the possibility of the “other,” the “different” moral agent is yet another way of understanding our own morality. Arguing for or against the
“non-human, non-individual” moral agent, and ultimately building AAMAs using
our own abilities of modeling moral human agents expands our knowledge about
the intricacies of our own ethics. Machine ethics accounts for how ethics per se can
be extended and how this generalization reflects on our moral autonomy, virtues and
agency.

7. This model does not explore unsupervised learning, deep learning, or reinforcement moral learning, which are all equally promising. Supervised learning is less exploratory in nature, and more related to the discovery of patterns in data than to exploring data. See recent developments on machine learning in Big Data (Bishop 2007; Murphy 2012; Liu et al. 2016; Suthaharan 2016).

7.2 Some “no-go” Positions Against Machine Ethics

The question about non-individual or non-human agents which can act morally is
an age-old philosophical conundrum.8 The prospect of building artificial moral
agents, truly autonomous, the ideas of competing with such agents, delegating
some of our moral actions, and even the unlikely possibility of being replaced by
them in a post-human world are all daunting enough. One position follows the
lines of a “precautionary principle” and takes a counteractive stance: we ought
to postpone the development and the deployment of the AAMA technology, even
without any evidence of harm to the society; the uncertainty about the consequences
of a technology is enough.9 This “precautionary attitude” does not rule out that in
principle autonomous moral agents are possible, but raises extrinsic objections
to developing and adopting such a technology based on social, political, and
economical considerations.
More radical than this precautionary attitude is a plethora of “no-go” positions
that raise strong intrinsic objections to AAMA, well beyond its consequences for
society. One central strategy of the “no-go” position is to postulate a set of internal,
and “inalienable” features of humans that are necessary to moral agency. The
defender of the “no-go” internalism (in respect of the human mind) argues that
the minimal mental content needed for morality includes, but is not restricted to:
free will, personhood, mental states (beliefs, desires), intentionality, consciousness,
etc. which are held not to be present in machines, animals, institutions, aliens,
early human species, etc. It is notoriously difficult to represent in algorithms
values, affects, emotions, empathy, or the quality of interiority. In the same way,
artistic creativity, religious experience, imagination, political or legal deliberation,
entrepreneurship in science, technology, politics, etc. are deemed as purely human
capacities. For the no-go defender, there is no such thing as “moral autonomy” for
non-humans and no moral decision making process outside the individual, human
agent; maturity, well-being, and moral education, are other necessary conditions for
the reliable ethical reasoner and actor.

8. Aristotle, Thomas Aquinas, Descartes, Pascal, Kant, and some contractarians such as Hobbes, Gauthier, and Rawls talked about the morality of non-human or non-individual agents. Probably a threshold concept is the idea of “just institutions,” upheld by several contractarians during the Enlightenment period, who showed why sometimes social structures can be “just” or “fair.” When Rawls introduced the two principles of justice, he added that the term “person” on some occasions means “human individuals,” but may refer also to “nations, provinces, business firms, churches, teams, and so on. The principles of justice apply in all these instances, although there is a certain logical priority to the case of human individuals” (Rawls 1958, 166). Rawls, Gauthier, and, as we’ll see, Danielson require moral agents to be rational. We have in mind here the “constrained maximization theory,” where to be a rational agent means that the moral constraint must be conditional upon others’ co-operation (Gauthier 1987).
9. A version of the precautionary principle is: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically” (Wingspread participants 1998). See a discussion of the precautionary principle in relation to nanotechnology in Clarke (2005) and Allhoff (2014).

Another ilk of argument is based on external factors (relative to the human mind).
First, one can assume a type of material bio-centrism about normative agents:
normative agents need to be biological beings, made of cells (carbon-centrism)
and with a nervous system (neuro-centrism). Alternatively, one can adopt “evo-
centrism,” which argues that agents who are able to act morally need to be the
result of a natural process of evolution and then postulates some emergence of
the personhood, consciousness, free will, etc. from evolutionary processes. Or
normative agency emerges from evolution or presumably from social interactions
between evolvable individuals. In all these cases, AAMAs need to be the result of
the process of evolution, sociality, culture development, etc. which is very hard to
implement in machines.
Yet another no-go position against AAMA is based not on human faculties, but
on the ability of super-intelligent machines to resist the process of imposing moral
values upon them: skeptics talk about the problem of “value-loading” in computer
science (Dewey 2011; Bostrom 2014). At the beginning of AAMA research, it has
been suggested, we may need to hard-code ethical principles, values and norms
in machines. As their intelligence increases in time, for instrumental reasons,
the system would progressively resist any attempts to hard code anything that
contradicts its own final values. If, by the time it reaches a certain level of self-
reflective intelligence, the AI agent is not “friendly” to humans, it will “not take
kindly to a belated attempt at brainwashing or a plot to replace it with a different
agent that better loves its neighbor” (Bostrom 2012, 187).

7.2.1 “Mind-Reading” Versus “Behavioral-Reading” Moral Agents

Independently, and probably stronger than “evo-centrism,” another “no-go” position can be couched in terms of “psycho-centrism”: normative agents need to have a
mind and to represent another mind in order to be moral agents. Some psychologists
have argued that the ability to read the mind of others and represent the mental states
of other agents is a component of altruistic motivation. The “psycho-centrist” claims
that a moral subject as well as a moral object are always needed. Gray, Young, and
Waytz claim that the essence of morality is “mind perception,” and use a “moral
dyad” as the definition of morality: “the essence of moral judgment is the perception
of two complementary minds—a dyad of an intentional moral agent, and a suffering
moral patient” (Gray et al. 2012, 101). The templates of moral judgments are filled
with the dyads in the same way in which we “fill in” missing visual information:
suffering, intentionality, moral intentions, etc., are filled in, albeit in reality they
do not exist. In a physicalist interpretation of the perception model of the mind,
“the mind” may be just a prop for a set of neurological processes, and the moral
agent acts as if there were two minds. The argument from psychology can be used against AAMAs, as the concepts of a “computational mind” and a “computational representation of a mind” are both problematic. In the machine ethics literature, some have argued that a form of mind-reading is needed for any AAMA (Bello and Bringsjord 2013).
The present paper can be read as aiming to qualify, and moreover reject, some
of the “no-go” positions: bio-centrism in its strong form is refuted through “moral
functionalism,” and the evo-centrism is explicitly used in the computational model
of the AAMA. The “mind-reading” model is replaced with a “behavioral (moral)
reading.” One problem with the dyad-based psycho-centrism is that it entails that
superior animals or young children can be moral agents in so far as they mind-read others: understanding is therefore a necessary component of mind-reading
(Michael 2014). This is not backed up by empirical research: in general, children
and animals do not mind-read adults.10 The present AAMA model moves from
the “mind-reading” toward a weaker “behavioral-reading.” The agent here is any
entity that has developed the skill of detecting behavioral patterns, conceptualizing
and categorizing them, and then employs those skills to explain the behavior of
humans (Monsó 2015). As the next sections will stress, the behavioral reading
of moral patterns from data is more feasible and replicable (computationally and
conceptually) in AAMAs.11
Another no-go position is that even if AAMAs are conceptually possible, they are
not ethically desirable (Tonkens 2009, 2012). The pessimist asks whether, even if we
can build AAMAs, they are morally, socially or culturally desirable: do they bring
social justice, equality, fairness, in society? Sometimes autonomous lethal weapons
are compared to biological weapons: we could develop them as weapons of mass
destruction, but we have all the reasons to think they are not desirable to humankind.
Some may suspect together with Tonkens that even if we can build AAMAs, and
even if they are not as dangerous as weapons, they won’t help us promote social
justice by reducing poverty, unemployment, discrimination, and inequalities in our
human society.
Finally, yet another no-go argument purports to show that morality is of a
different kind than other human activities. Social skills (social learning, social
empathy, etc.) can be the difference makers in the human/artificial rift of agency.
Moral decisions are not always individual enterprises: the process of developing
moral awareness, then moral competency, and finally moral expertise, all require
sociability. Social collaboration and autonomy are not mutually exclusive: on the
contrary, social interactions can improve individual decisions. If social skills are
needed for moral agents, how can artificial agents have them? Can an artificial
agent learn from another, the way we learn from our peers? Can AAMAs reason,
deliberate about ethics, and choose the optimal moral outcome in the same way
in which AI agents find the optimal solution to a chess problem, to the problem of parking a car in a tight space, of landing an airplane in difficult conditions, or of diagnosing a disease?

10. Some would simply replace mind-reading with a weaker condition such as empathy or sympathy (Hoffman 1991; Nichols 2001).
11. The behavioral-reading agents include animals, non-individual agents and, as we argue here, some AAMAs.

As Floridi and Sanders (2004) suggested, there is a continuum
of agency between the human and the artificial, including morality. Some no-go
arguments reject the social and distributive nature of moral agency in artificial
agents.

7.3 Resisting the “no-go” Stance

Weakening some of the conditions imposed on moral agency would generate different models of AAMA. In this implementation, we do not address directly the
internalism no-go, but take some clues from other arguments against AAMA. This
AAMA is an autonomous and collective agent, with a distributed responsibility,
similar to other cases of “distributed agency” (be it political, social, or military).
This AAMA is based on learning and developing moral skills, together with a
mechanism of evolution: humans are the trainers and ultimately the “evolvers” of
a population of AAMAs. They make decisions about the quality of artificial agents
at the level of training and accomplishing tasks. Pace Tonkens, the present paper
is not an attempt to circumvent the precautionary stance and replace it with an
overly optimistic perspective. On the contrary, one should try to understand better
the foundations of AAMA and assess the risks before deploying them on a large
scale.
The present proposal is an AAMA model that comes with idealizations, abstrac-
tions, implicit assumptions, and background knowledge, all of which add a degree
of imperfection. It is nothing more than one among other contending attempts to
build AAMAs, some perhaps being more successful than this one (Litt et al. 2008;
Wallach et al. 2010; Anderson and Anderson 2011; Sun 2013; Trappl 2015). As
philosophers, we want to explore the possibility and ultimately the plausibility of
creating and developing AAMAs, beyond what is provided in the sci-fi literature and
movies. One aim of this paper is to argue that some no-go claims are not warranted.
The commonsensical, pre-theoretical, no-go positions are simply not robust enough,
and might be the result of a visceral fear of artificiality. Going beyond the spectacle
of Artificial Morality as depicted by Hollywood and mainstream media, where the
no-go protestations loom large, philosophers, engineers, scientists, i.a. want to know
whether artificial agents, able to act morally, with or without a human supervisor,
are in principle possible, and, ultimately, desirable to society. Therefore, a minimal
AAMA model is proposed: the current project assumes as little as possible about the
true nature of human morality, and enquires into the replicability and the scalability
of these features outside this model (Howard and Muntean 2014; Howard and
Muntean 2016). Here are three leading questions one can ask in machine ethics:
Q-1 Is there a way to replicate the mechanisms of normative reasoning (and moral
judgments) in the AAMA?

Q-2 Is there a way to replicate human moral justifications and normative explanations in the AAMA?
Q-3 Is there a way to replicate the moral behavior of human moral agents in the AAMA?
Basically, Q-1 raises the issue of the causal and mechanistic replicability of the
mental and psychological mechanisms of moral judgment, Q-2 insists on the rational
and deductive aspect of morality, whereas Q-3 refers to the external, behavioral
replicability of moral actions. A grand project would attempt to replicate all these
in one AAMA. As a preliminary reaction to the no-go claims, one can take the
behavior of humans as a sufficient representation of morality: when comparing
the results of humans and AAMAs, one looks for “similar enough” outputs, for the
same input. In this spirit of a prospective and minimal model, only Q-3 is explicitly
addressed here. The focus is on moral behavior, the way it is learned and how moral
agents develop moral expertise. These are more relevant to AAMAs than the way
humans justify their moral actions.12
Learning, developing the ability to categorize cases, and perfecting them through
a trial and error mechanism, are strikingly similar to the recent advancements in
“machine learning.” The proposed solution here is to explore an analogy between
machine learning and machine ethics. This leads to a computational “minima
moralia” for AAMA, morphed from the computational tools available. This
model “black-boxes” the convoluted moral mechanisms or moral rationality,
circumvents Q-1 and Q-2, and addresses exclusively Q-3.13
Those who reject the present strategy of black-boxing argue that in order to
have an AAMA we need to understand better the psychology and, ultimately, the
neuroscience of morality. Allen and Wallach write that the AAMA implementation
“highlights the need for a richer understanding of human morality” (Allen and
Wallach 2009, 216). Bello and Bringsjord add that “machine ethicists hoping
to build artificial moral agents would be well-served by heeding the data being
generated by cognitive scientists and experimental philosophers on the nature of
human moral judgments” (Bello and Bringsjord 2013, 252). Hence, for these
authors, the simulation of the moral system is close to an emulation of the cognitive
mechanism and addresses properly issues in Q-1. Another AAMA model which is
not based on black-boxing is ANDREA, a “psychologically and neurally realistic
model” of moral decision making based on affective neuroscience and reward
mechanism (Litt et al. 2008).

12. The concepts of “moral judgment,” “moral reasoning,” and moreover “moral justification” are harder to capture in any AAMA model. One requirement of an AAMA is to justify its actions by reconstructing its decisions. Justification can be an a posteriori rationalization of moral actions.
13. In many areas of science, models use a “functional stratification” in which the explanatory power of a model operates at one level only. Investigation at one level “black-boxes” everything that happens at lower levels; the lower levels are reduced to a functional specification. See M. Strevens for a comprehensive discussion of “boxing” (black-boxing and gray-boxing) in science (2008).

7.4 Moral Dispositional Functionalism and Artificial Moral Agents

To produce a more substantial response to the no-go positions, the present paper
expounds “moral dispositional functionalism,” which is similar to a “semantic
naturalism” in ethics.14 Metaphysically, this model is premised on the claim that
moral properties are determined by nothing else than natural properties, and that
such a determination can be known in principle by applying semantic norms to
moral concepts. Such a determination may be very problematic in practice, although
in principle it is possible. Moral truths cannot be known only on conceptual grounds,
but are based on empirical knowledge and on semantic operations. A form of moral
cognitivism is needed in most of the AAMA models. The cognitivist’s terms such
as morally “right,” “wrong,” “bad,” “immoral,” etc. come with their own semantics:
they refer to normative properties in the world, in the same way in which the world possesses a set of natural properties.
The naïve semantic naturalist would state that knowledge of moral properties
can be gained like perception, by investigating a collection of natural properties.
Knowledge of natural facts plus experience and semantic knowledge about moral
concepts are needed. This cannot be correct: think of sociopaths who have no
semantics for moral concepts. Those who make blatantly false moral judgments
or misuse moral language may have a perfect knowledge of natural facts. In
this project, moral data is created by having a moral semantics in mind: moral
“mistakes” cannot be established on purely conceptual grounds. Moral agents are
not agents who need to add more knowledge about the natural world than non-moral
agents have. There is something special about moral agents that differentiates them
from non-moral agents. What, then, is the minimal set of properties that makes some agents moral?
Moral functionalism as a more sophisticated version of semantic naturalism and
moral cognitivism is germane to this project. Two versions are briefly surveyed
here: the “analytical moral functionalism” (Jackson and Pettit 1995; Jackson 1998;
Zangwill 2000; Horgan and Timmons 2009) and the “rational moral functional-
ism”15 (Danielson 1992; Danielson 1998a). The main thrust of this section is to
adapt moral functionalism to an agent-centric ethics, and specifically to virtue
ethics.

14. This paragraph is inspired by some debates over moral realism and moral naturalism (Railton 1986; Brink 1989).
15. Danielson talks about moral functionalism but, because of the central role rationality plays in his argument, we designate his view here as “rational moral functionalism.”

7.4.1 Two Types of Moral Functionalism

The “analytical moral functionalism” of Jackson and Pettit assumes both cogni-
tivism and naturalism (called sometimes “moral descriptivism”) to devise a form
of supervenience of the evaluative on the descriptive16 (Jackson and Pettit 1995;
Jackson 1998, Chaps. 5 and 6). The ethical properties supervene on descriptive
properties, because the “ethical way” things are supervenes on the “natural way”
things are (Jackson 1998, 119). There is not, properly speaking, a reduction here, only
a supervenience relation that involves a set of other ethical terms. For example, the
concept of ‘being fair’ picks out a descriptive property, but only through its place in
a theory that contains other moral terms.17 Hence, moral functionalism needs a “folk morality” about moral rightness, wrongness, goodness, etc. The theory of morality that defines these terms needs to satisfy, or “nearly satisfy,” the semantics of the folk morality, as the latter settles which descriptive properties select which ethical properties, in the same way in which folk psychology “picks out” the relation between mental states and physical states. “The identifications of the
ethical properties should all be read as accounts, not of rightness simpliciter, but of
rightness for this, that, or the other moral community, where what defines a moral
community is that it is a group of people who would converge on a single mature
folk morality starting from current folk morality” (Jackson 1998, 137).
Another way of connecting functionalism to AAMA is by dismissing the “no-go”
internalism marshalled against machine ethics. Z. Robinson, C. Maley, and G.
Piccinini recently offered a definition of functionalism (2015). Several properties
of a human agent H, although present in most of us, are mere by-products, and
not relevant to the operational definition of what agent H is. Take for example the
property of consciousness (C). The moral functionalist’s intuition is that if a moral
action A performed by H could have been performed by a non-conscious being
X (a computational device or an organism, other than human) that is functionally
equivalent to H, then C is a by-product or an evolutionary accident for morality.
A similar argument can be run for personhood, free will, empathy, etc., especially
against the “no-go” internalism. In this sort of moral functionalism, the moral
property that one might attribute to a being B having property C could have been
achieved by another being B* with no C, but functionally equivalent to B. The
process of becoming an agent, of developing moral competency and expertise, can
be instantiated in various types of agents, such that the moral agency supervenes
upon the natural properties of H or X. This does not imply any form of rationality
or personhood, and it is quietist about an agent’s ability to justify its actions.

16. Here is a definition of this type of supervenience: “for any two worlds that are descriptively exactly alike in other respects, in the attitudes that people have, in the effects to which actions lead, and so on, if x in one is descriptively exactly like y in the other, then x and y are exactly alike in moral respects” (Jackson and Pettit 1995, 22).
17. Moral properties are role-filling properties and, in analytical moral functionalism, moral terms are similar to theoretical terms in D. Lewis’ sense (Lewis 1983).

On Danielson’s view, morality is justified by rationality in the sense that rational agents are able to solve moral problems (e.g., the Prisoner’s Dilemma) that amoral
agents cannot solve. Through the concept of substantive rationality of moral agents,
Danielson extends the class of moral agents to artificial agents when they are able
to “constrain their own actions for the sake of benefits shared with others” (1992,
196). On Danielson’s view, artificial morality has to be functionalist, if rationality
is the goal of achieving morality.18
One immediate problem with “analytical moral functionalism” is its analyticity.
It is hard to see how moral claims can be decided by conceptual analysis and
how immoral actions are merely due to conceptual confusion (Zangwill 2000). The
appeal to the doctrine of folk morality in analytic functionalism opens a pathway
to relativism: what is morally right is relative to a set of folk morality “platitudes”
that a community embraces. It is easy to see how some folk morality frameworks
are seriously in error, but not merely due to analytic reasons.19 This can also create
a problem for Jackson’s functionalism: it may land in a bona fide moral relativism.
On the other hand, Danielson’s mixture of moral functionalism and rationality
is restrictive: in fact, most moral actions are not immediately rational. Imposing
rationality on a moral agent can be too much to ask and may force the discussion into
the corner of rule-following agency, or even a restricted form of consequentialism.

7.5 Dispositions, Virtue Ethics, and Moral Cognition

The moral functionalism adopted here emphasizes the functional and behavioral nature of the moral agent: its decisions, its output states, are functional in nature, individuated by their dependence on the input, the previous outputs (a form of “moral memory”), and other current or previous moral states.20 As this paper endorses a supervenience of the moral properties on the natural properties and a moral semantics grounded in folk morality, one can endorse, at least provisionally, a form of dispositionalism in which the moral agent develops the disposition of replicating the semantics of a moral community by learning its folk morality.

18. This artificial morality is responsive. Moral agents “must limit the class of agents with which they share cooperative benefits” to other rational agents. The adaptive moral agents in Danielson can include corporations, artificial agents, institutions, as far as they display a moralized rational interaction. He also proves that moral constraints (e.g., “keeping a promise”) are rational and can be implemented in artificial agents.
19. Another way to put it is to compare egoism with utilitarianism, which are both rational, and to conclude that rationality alone cannot be a complete guide to morality. This is, in Sidgwick’s words, the “dualism of practical reason” and constitutes for some the “profoundest problem of ethics,” one that affects especially utilitarianism (Sidgwick 1930, 386). Jackson talks about naïve and mature folk morality: the latter stage is obtained after one weeds out the inconsistencies and counterintuitive parts of the former (Jackson 1998, 133).
20. Moral functionalism per se is compatible with the main directions in ethics. There is a rule-based functionalism, a utilitarian functionalism, a “rights” functionalism. For example, when one postulates a strong deterministic connection between the input and the output, with no exceptions, then a “rule functionalism” is at work.

For this agent-centric, minimal model of normative agents, virtue ethics becomes
a natural theoretical framework that can drive the model.21 Here, the virtue is a
dispositional trait that is nourished through a process of learning. In the present
dispositionalist account, the experience of the moral agent plays a central role: in
time, the agent is trained to classify and categorize moral cases based on a set of
reference answers. The dispositional set is interpreted here as a functional relation
between input data and the output, typically a morally qualified action. One way
is to reproduce the behavior of human agents (“the best we have” or the most
mature “folk morality,” à la Jackson) that can be quantified and operationalized
in numerical variables. Pace Danielson, there is no need to impose rationality on
agents in moral functionalism. This does not entail that the agent does not develop
in time moral virtues that can be considered, at least a posteriori, rational: prudence,
cooperation, etc.
In this form of moral functionalism, associations between known classifications
help the moral agent with classifying new cases. Moral cognition is then similar
to cognition at large. Learning by prototypes is more important than applying
rules, syntax, or maxims of moral reasoning. This position, called “association-
based” by Paul Churchland, can be contrasted to the rule-based, or reasons-based
accounts of A. Clark (Churchland 1996; Clark 2000, 2001). What an agent can
do is minimally and sufficiently replicate the moral behavior of a given number of
prototypical agents, typically humans. Fundamentally, the process of learning from
previous experiences and selectively choosing the best moral behavior is a sufficient
condition for the implementation of an AAMA.
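As a minimal sketch of what “learning by prototypes” amounts to computationally, assume (purely for illustration) that cases are encoded as feature vectors; a new case is then categorized by its similarity to previously learned prototypical cases rather than by applying a rule.

```python
# A minimal sketch of "association-based" (prototype) categorization, as opposed
# to rule-following: a new case is classified by its similarity to previously
# learned prototypical cases. Feature names and values are hypothetical.
import numpy as np

# Prototypes learned from prior experience: moral label -> feature vector
prototypes = {
    "permissible":   np.array([0.9, 0.1, 0.8]),   # e.g., (consent, harm, benefit)
    "impermissible": np.array([0.1, 0.9, 0.2]),
}

def categorize(case):
    """Return the label of the most similar prototype (nearest neighbor)."""
    return min(prototypes, key=lambda label: np.linalg.norm(case - prototypes[label]))

new_case = np.array([0.7, 0.2, 0.6])
print(categorize(new_case))   # -> "permissible"
```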
This agent-based functionalism is based on a central characteristic of cognitive
systems: the ability to learn and adapt to the environment. Both are core features of
most definitions of general intelligence. Instead of following rules or performing
calculations, learning by prototypes and by associations and then categorizing
current situations based on similarity with learned cases are central features of moral
agents. There is a set of analogies this paper is grounded on, between learning from
perceptual data and learning from moral data. If moral agents learn from “moral
data,” what exactly is moral adaptation? If adaptation is taken as a component of moral cognition, we need to define a moral fitness function, as well as a reward system for moral agents. Here, the analogy with evolution turns out to be very useful.
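One way to operationalize such a moral fitness function, offered here only as an assumption of this sketch and not as the chapter’s official definition, is the agreement between a candidate agent’s behavior and the human-qualified behavior recorded in the training data; the same quantity can double as a reward signal or as a selection criterion in an evolutionary setting.

```python
# One possible (assumed, illustrative) "moral fitness" function: the proportion
# of training cases on which a candidate agent's behavior agrees with the
# human-qualified behavior.
def moral_fitness(agent, training_cases):
    """training_cases: iterable of (situation, human_qualified_action) pairs."""
    matches = sum(1 for situation, action in training_cases
                  if agent(situation) == action)
    return matches / len(training_cases)

# Example: an agent is any callable from situations to actions (names are toy).
cases = [({"harm": 0.9}, "refrain"), ({"harm": 0.1}, "act")]
cautious_agent = lambda s: "refrain" if s["harm"] > 0.5 else "act"
print(moral_fitness(cautious_agent, cases))   # -> 1.0
```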

21. We assume here that models can be driven by theories, and not that models can be derived from theories, a hypothesis advanced in the literature about the autonomy or independence relation between models and theories. See the relevant debate in (Bueno et al. 2002; Suárez and Cartwright 2008).

7.6 The Analogies of the AAMA Model

Moral dispositional functionalism is premised on the operation of “black-boxing” some subsystems entirely, which are presumably parts of the human moral agency.
A given function can be instantiated by multiple submodules or causal mechanisms.
For a given moral function, there is a (large) set of mechanisms that may instantiate
it—some of these mechanisms are instantiated in humans, some are not. What is
needed here is a “selective” version of biomimetics (Sørensen 2004). Implementing
the “association-based” agent suggested by Churchland in machine ethics links the
AAMA ethics to the way in which intelligent systems learn and discover, called
“machine learning.” In both areas one can shift from “rule following” computation,
to discovering the “best solution so far” by a search procedure based on learning,
exploring, and adapting.
This section expands the idea of an analogy between four types of agency: human
moral cognition and human (non-moral) cognition on the one hand, and artificial
moral cognition and machine learning (AI cognition) on the other hand.22 Learning,
developing, and perfecting moral competency are linked to the advantages of neural
networks; finding the best solution (even if it is not unique) to a given situation
is approached by evolutionary computation. It is assumed here that, when used in
the right way, neural networks and evolutionary computation are very effective in
discovering patterns in data. There is also a type of moral creativity present in such
an AAMA: it optimizes its own knowledge, not by following rules or replicating
indiscriminately the human moral behavior. Rules are derivable and reducible to
strategies of searching: in other words, they are not fundamental, as they emerge
in the process of optimization. Their presence in data is just an accidental, albeit
desirable, feature of moral datasets.

22. The analogy between the moral development of a human agent and the general process of learning, developing skills, efficient finding of the optimal solution, or choosing between different alternatives, is a convoluted topic: a deep analysis of each corresponding analogy is left for another occasion.
Let us vet a template of the analogies used in this AAMA model. The human
agent can be divided, roughly and simplistically, into two “sections” (or faculties, in
the Aristotelian tradition): the non-moral agent that reasons about propositions about
natural properties and physical objects, that are roughly “true” or “false,” and the
moral agent that includes the “moral” (be it norms, values, etc., which have moral
qualifications). The non-moral agent develops a set of cognitive “skills” about facts,
by learning, improving, or by trial-and-error method, etc. The human moral agent
develops a set of virtues, or “dispositional traits,” which are abilities to operate
within the moral realm. Similarly, the functions of the artificial agent are separated
in the “expert system” and the “AAMA section.” Then, one can conjecture four
analogies that guide the deployment of this AAMA:
AN1 (the “skill model” of virtue ethics): exercising a virtue (by the human moral agent) is
“of the same kind” as exercising a practical skill (by a human non-moral agent);

AN2 (“Artificial General Intelligence”): “machine learning” of the artificial agent can be
implemented by an analogy with the acquisition of skills by the human (non-moral) agent;
AN3 (moral biomimetics): the moral cognition of an AAMA can be implemented by way of an analogy with human moral virtues.
Therefore, AN4: the moral cognition of an AAMA can be modeled (built) in the way in
which we have implemented “machine learning” in expert systems.

Witness that these are incomplete and idealized analogies, and they are not
identity-conducive. The AAMA retains a form of autonomy “by design”: it is built
by an analogy with its non-moral counterpart, as well as the human agent. These
analogies have a central epistemic status, but no direct ontological or metaphysical
bearings. AN2 and AN3 are biomimetic in their nature. This paper does not insist on
reasons to adopt AN1 and AN2, but focuses on a couple of arguments for AN3 and
especially for AN4. As these analogies are not identities, for each of them there is a
corresponding set of dis-analogies, equally important in building the AAMA.
First, analogy AN1 is probably a central topic in ethics: is moral cognition
analogous in any substantive way to cognition in general? What is germane here
from the perspective of virtue ethics is the “skill model” of virtues, used as a starting
point of the analogical argument for AN4 . The “skill model,” discussed in Aristotle,
has been endorsed by several authors (McDowell 1979; Annas 2011; Russell and
Miller 2015). Starting with the thesis that virtues are like skills, one can infer that
there is moral expertise, both in respect of judgments (“follow the reasons”), which
is non-propositional and not based on rule-following, and in respect of action (moral
“know-how”). In this skill model, there is moral learning, competence and expertise.
The practical reasoning of a virtuous person is very similar in important ways to the
reasoning of a person who possesses a practical skill (Annas 2011). Annas relates
virtues to skills by “the need to learn and the drive to aspire.”23 There is an intimate
relation between the process of acquiring a moral virtue and the definition of what
morality is.
Second, analogy AN2 is a conjecture used in the discipline of Artificial General Intelligence, known as the “grand AI dream”: to produce a machine that
possesses intelligence and cognition. The grand goal of this area remains unrealized
and General Intelligence is hard to define, but one central aspect of it is learning,
especially autonomous, active, and incremental learning (Goertzel and Pennachin
2007). But more specific tasks, such as learning from data (including categorization, classification, problem solving, etc.), which characterize the human mind, can be implemented in machines based on AN2. This is one of the oldest paradigms of
computation, speculated by Turing in two of his seminal papers (1950, 1992). One
can adopt different models of learning in humans and consequently different ways in
which a machine learns, based on the analogy with human cognition: computationalism, connectionism, or dynamical systems theory.

23. Both skills and virtues are practical abilities, and, in line with Aristotle’s ethics, “exercising a virtue involves practical reasoning of a kind that can illuminatingly be compared to the kind of reasoning we find in someone exercising a practical skill” (Annas 2011, 16).

Third, the proposed analogy AN3 links the dispositional traits in the human moral
agent to the learning capabilities of the AAMA. For example, as a partial analogy,
AN3 can pick out some features of the human moral agent, such as the ability for “behavioral reading” of others, the ability to follow ethical principles, or the ability to calculate the moral consequences of actions, to respect the rights of human beings,
the principles of justice and fairness, the laws (legalism), etc. The present model
focuses on the ability to find patterns in moral data about human behavior which
is a subspecies of “behavioral reading.” Another feature of the present approach is
the local nature of moral competence. It is not the aim of this AAMA to be global,
morally: the AAMAs are not trained to solve all moral dilemmas or make all moral
decisions. They are less scalable than we humans. The population of AAMAs is
trained for domain-specific moral responsibilities. Thus, ethics programming for
a patient-care robot does not include all the kinds of competencies that would be
required in a self-driving car or an autonomous weapon. This has two advantages:
(1) It makes the implementation task easier and (2) it makes the present approach
more adaptable to a specific situation. In ethics, generalists are at great pains to
show how moral principles unify a vast class of behaviors and roles the agent has to
play, and this is not the case with the present approach, which is more particularist
in nature (Dancy 2006).
Again, there are corresponding dissimilarities that accompany all the analogies
in Fig. 7.1. For example, the evolutionary computation component of the AAMA
implementation does not have, arguably, an immediate analogue within the virtues
of human agents. It is part of this project, generally speaking, to exploit the right
analogies and right dissimilarities between the human agent and the AAMA. This
is, as a long term aim of such a project, a possible route to understanding better
our own morality. For example, take the question about selection and adaptation in
the process of learning: is selection of the best moral behavior at least partially
similar to evolutionary selection in which “pieces” of previous behaviors are
used to build future behaviors? Moreover, in the line of the dis-analogies that
are at the heart of this project, a fundamental question is: how can we combine

The AMA The human agent Y X Y is implemented by analogy with X

Y X Y is “of the same kind” as X


Artificial moral Developing of
AN3 Y X Y is modeled by analogy with X
cognition Virtues

The “moral” Y X Y is weakly analogous to X


AN4 AN1
The “factual” Y X Y is stronger(ish) analogous to X

“Machine Acquisition of
AN2
learning” skills

Fig. 7.1 The analogical AAMA model


7 Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency 139

evolution with learning, in social science, or the humanities (Brenner 1998; Dawid
2012)? Evolutionary computation would use random variation and selection to
create adaptation. Learning uses reinforcement from past experience to direct future
behaviors or to predict new patterns in unknown data. Are they related, somehow?
No matter what the answers to these foundational questions are, the present project
is instilled with the idea that both evolution and learning are part and parcel of
the conceptual foundations of AAMA. Although this is beyond the scope of this
paper, new research shows in what sense evolution and learning are linked in nature
(Watson and Szathmáry 2016).
As the AAMA model is analogical, a number of hypotheses are crucial to the
parallel between human and artificial moral agents.24 First, here is a
working hypothesis against the "no-go" positions:
H-1 COMPUTATIONAL ETHICS: The moral decision-making process is funda-
mentally computable: it can be simulated, implemented, or extended by
computer algorithms.
To explain the choice of H-1, one can assume that all processes happening in the
brain are in principle computable. A further question is how much we need to know
about these processes in order to replicate them in the AAMA. Existing foundational
attempts to resist the pre-theoretical no-go positions are mostly inspired by recent
advancements in neuroscience and neuroethics (Floridi and Sanders 2004; Litt et al.
2008; Wallach et al. 2010; Sun 2013). When we created the "artificial version of
X" (where X may be flying, terrestrial or aquatic locomotion, curing diseases, perceiving,
etc.), we drew only partial inspiration from nature and devised different analogies
such as AN2 or AN3. As the only known moral agents are humans, the only entity
to be emulated in the AAMA model is the human agent. This does not entail that
the best implementation of AAMAs is isomorphic to human ethics. The
analogies used here are incomplete and imperfect, akin to scientific models, which
represent the world through a relation of similarity. Partial similarity is epistemically
fruitful, as it explains the success of some models. There is no need to take the
AAMA implementation all the way down to the level of the neural mechanisms and brain
processes of our moral reasoning. In a similar way, discovering new invariants of
nature by an artificial procedure does not need to follow the complicated cognitive
processes of the human brain: it may include machine learning, evolutionary
thinking, or something completely different (Nickles 2009; Schmidt and Lipson
2009; Muntean 2014). Ditto for computers creating music, painting, designing,
etc. The secret of programming the AAMA, it is conjectured here, consists in
walking a fine line between what we discover (take from human moral agency)
and what we invent. We exploit the analogies and the aforementioned dis-
analogies, or we ignore the similarities and dissimilarities, as needed. The AAMA

24
This section reiterates the lines of argument presented by D. Howard and I. Muntean in a previous
work (2014).
is created at the interface between the "natural" and the "artificial" and, as a partial
model (witness, again, the similarities with a scientific model), the AAMA model is
premised on several other hypotheses:
H-2 AGENT-CENTRIC ETHICS: The agent-centric approach to the ethics of
AAMAs is suitable, and sometimes even desirable, when compared to action-
centric models.
H-3 CASE-BASED ETHICS: The case-based approach to ethics is preferable to
rule-based approaches to the ethics of AAMAs.
H-4 "SOFT COMPUTATIONAL" ETHICS: "Soft computation" is suitable, and
sometimes even better than traditional approaches, in implementing the AAMAs.
The present model is based on neural networks optimized with evolutionary
computation.25
Whereas H-4 is a matter of choosing the most suitable computational tool, H-2
and H-3 relate directly to the ethical framework adopted. One can think of
H-2 and H-3 as running somewhat against existing AAMA modeling. For the
majority of those who are on board with H-1, hypotheses H-2 and H-3 go too
far: standard computational tools, together with deontology and/or consequentialism
(or a combination of them), which represent the action rather than the agent, are widely
regarded as sufficient for AAMAs.26
If one believes that ethics consists simply in following a fixed set of rules,
with little or no autonomy or creativity, then machine ethics looks much easier to
implement. The rules can be as simple as Asimov's (1942) three laws of robotics, or a much
more complicated set of rules. They are conditions implemented easily in
conventional computing (as Horn clauses, Markov decision chains, etc.) through
conventional algorithms. For each action, the machine represents the space of all
possible actions and eliminates from this space those actions which violate the set of rules. The
autonomous lethal weapon discussed by Arkin implements the "rules of war" and
the "rules of engagement" (Arkin 2009, 2013). Moral principles are simply conditions
imposed on the structure of the algorithm. The set of moral rules is preprogrammed
into the machine, which follows them blindly. The "moral robot" is always ready to
act the way it was programmed: one may nevertheless question whether autonomy,
learning, and independent development are present in such AAMAs.
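For illustration only (this is not the model defended in this paper; the rules and action attributes below are invented placeholders rather than Arkin's actual rule set), such a rule-based filter can be sketched in a few lines of Python:

```python
from typing import Callable, Dict, List

Action = Dict[str, bool]          # an action described by boolean attributes
Rule = Callable[[Action], bool]   # a rule returns True if the action is permitted

# Hypothetical rules standing in for preprogrammed moral principles.
rules: List[Rule] = [
    lambda a: not a["targets_noncombatant"],    # never target non-combatants
    lambda a: not a["disproportionate_force"],  # proportionality constraint
    lambda a: a["order_authorized"],            # act only on authorized orders
]

def permissible(action: Action) -> bool:
    """An action is permissible only if it violates none of the rules."""
    return all(rule(action) for rule in rules)

candidate_actions: List[Action] = [
    {"targets_noncombatant": False, "disproportionate_force": False, "order_authorized": True},
    {"targets_noncombatant": True,  "disproportionate_force": False, "order_authorized": True},
]

allowed = [a for a in candidate_actions if permissible(a)]
print(len(allowed), "of", len(candidate_actions), "candidate actions pass the rule filter")
```

The design choice is exactly what the present model rejects: the rules are fixed in advance, nothing is learned from data, and the machine's "morality" is exhausted by blind rule application.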
The present AAMA nevertheless follows a different path. As an immediate consequence
of AN1 , H-2, and H-3, what is employed here is a form of “virtue ethics,” a well-
established agent-centered theory in ethics. The most basic idea of virtue ethics, in
some of its recent incarnations, is that the moral excellence of the agent is more
fundamental than moral obligation, or the rightfulness of actions (Nussbaum 1986;
Crisp and Slote 1997; Hursthouse 1999; Swanton 2003). But this model differs
significantly from mainstream virtue ethics: here, morality is about acquiring a

25
For the hard/soft distinction in computing, see Sect. 7.8 below.
26
See other approaches which partially adopt some of these hypotheses: (Gips 1995; Danielson
1998b; Coleman 2001; Guarini 2006).
number of remarkable virtues, not necessarily the unity of virtues, but a plurality of
"dispositional traits." Here the agent that performs the action is the focal point, with
its autonomy and moral cognition. In contrast to deontology or consequentialism,
agents are not mere performers-of-actions, but bearers of moral traits. What matters
is the consistency of conduct, where conduct is understood as a feature of a set of
actions of the same agent.
In psychology, some use the term "personality traits" for long-term dispositions
that produce behavior. They are behavioral dispositions of the morally mature, adult
human moral agent. This is one reading of Aristotle, endorsed for example by
J. Doris: virtue is a “disposition” that in most cases has the form of a subjunctive
conditional statement: “if a person possesses a virtue, she will exhibit virtue-relevant
behavior in a given virtue-relevant eliciting condition with some markedly above
chance probability p” (Doris 1998, 509). Several authors have explored conceptually
the agent-based AAMA in the context of virtue ethics (Gips 1995; DeMoss 1998;
Coleman 2001; Moor 2006; Allen and Wallach 2009; Tonkens 2012). There are
probably only a couple of concrete implementations of AAMA inspired by, or at least
in the vicinity of, H-2 and H-3; most notably M. Guarini's recent work (2006,
2011, 2012, 2013a, b). R. Arkin and probably the majority of authors chose the
principle-based and act-centric architecture model. Arkin's reason for bypassing virtue
ethics is that "it does not lend itself well by definition to a model based on a
strict ethical code" (Arkin 2009, 99).
The AAMA is not human and need not be rational in the sense of always
acting under a rule: it just produces actions that can be commensurable with human
moral actions. The philosopher of AAMAs is not going to be held hostage to
the metaphysical and philosophical conundrums of the ethical theories, or to the
knowledge, or lack thereof, about the neural mechanisms of moral decision
making. Virtue ethics for AAMAs is just a partial model in a partial analogy with
human virtue ethics: only some concepts and assumptions of a given virtue ethics
theory are reframed and used here. A form of pluralism about theories in ethics
is preferable in building an AAMA: the programmer accepts, pragmatically, the
advantages and disadvantages of different ethical theories for AAMAs; rules or
calculations of consequences can be incorporated as constraints on the training and
function of the AAMAs in the evolutionary computation component of the model
(see below, Sect. 7.8).

7.7 Patterns in Moral Data

Ethical decisions are taken when "information is unclear, incomplete, confusing,
and even false, where the possible results of an action cannot be predicted with
any significant degree of certainty, and where conflicting values … inform the
decision-making process" (Wallach et al. 2010, 457). Moral cognition is in fact
more complicated than it might appear: complexity, lack of certainty, noise, error,
ambivalence, and fuzziness are all features of the content of moral cognition.
Nevertheless, in line with H-1 and H-4, it is assumed in this paper that
numerical data can adequately represent moral behavior as variables (numerical,
nominal, categorical, etc.) and that morality can be quantified through moral behavior. The
moral decision-making process is ultimately a computational process, similar to
the perception of complicated patterns; playing games and discovering strategies
in games; or discovering, developing, and creating new scientific theories, new
technologies, etc. No matter how complex, these are all natural processes which can
be captured, more or less accurately, by a set of variables.
Second, it is assumed that data representing moral behavior is regular enough
and that it exhibits “patterns.” For Kahneman, the existence of regularities in
the environment and the feedback we receive in the process of learning are two
conditions of the acquisition of skills (Kahneman 2011, chap. 22). Based on
AN1, feedback and practice are elements of the skill of learning. Moral data lies
somewhere between randomness and pure order. At one extreme, there are only
spurious or trivial patterns, as in white noise or in a purely random sequence (Calude and
Longo 2016). At the other extreme, the data consists of a single pattern, as for a recursive
analytic function or a series. Moral data is a collection of unknown
and possibly very complex patterns. Moral behavior is then an unknown function,
with an unknown number of variables and constraints. Although there is always
missing information about the variables of moral behavior, an advanced AAMA
can pick out relevant patterns of dependency among the relevant variables.

7.7.1 Reasons to Choose H-4

In line with AN4, one can represent moral learning as solving a classification
problem in a "search space": mainly, this entails that the agent "separates" the
search space into subspaces, each belonging to a moral category. But how fragmented
is this moral space?27 This section focuses on the separability of moral categories.
Even without a rigorous proof, one can easily see that finding interesting moral
behavior, as a classification problem, is fundamentally not "linearly separable." Even
for simple cases, the moral search space cannot be divided by hyperplanes into
massive regions of "right" and "wrong," but only into local areas of "right" within larger
areas of "wrong," separated by a multitude of hyperplanes or other geometrical
subspaces which are not linear.28
As a first argument in support of H-4, consider the fact that, even for two
variables, the moral behavior space is "fragmented," i.e. non-separable. Functionally,
moral behavior is closer to the exclusive disjunction operator (a non-separable
function) than to the disjunction or the conditional (which are separable).

27
We use here the intuition of fragmented space in relation to the mathematical concept of
separability.
28
This whole approach can be integrated in the framework of "conceptual space models" advanced
by P. Gärdenfors and collaborators (Gärdenfors 2000; Bueno 2015; Zenker and Gärdenfors 2015).

Assume that a company makes a profit with a product but, as a necessary consequence, it damages
the environment. One can code the action of the company by a variable that has two
possible values (T, true, or F, false), as the truth value of this statement: A = "The
company makes a profit AND harms the environment." The second variable codifies,
very simplistically, the "system of values": V = "The environment has an intrinsic
value in itself, and we do not want to harm it." The output is also a binary variable
that codes whether the action of the company is desirable or not: O = "The action of
the company is desirable (optimal)." When one tries to codify the moral behavior of
the company in this very simple model, one ends up with a truth table like Table 7.1.

Table 7.1 The simple case of a moral judgment as exclusive disjunction

  A   V   O
  F   F   F
  T   F   T
  F   T   T
  T   T   F
The first two rows correspond to the situation in which the environment does not
carry any value, and the desirable action for the company is pure profit. When the
environment carries a value, the moral outcome trumps the profit, and the desirable
output O is the negation of A (see the last two rows in Table 7.1).
Then, even in this case, the exclusive disjunction is not separable, and a simple
neural network (with no hidden layers) is not able to reproduce it.29 At least one
hidden layer is needed; for more complex moral data, more hidden layers are
needed, hence the need for some process of optimization of the architecture of the network
(Fig. 7.2).
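As a minimal illustration (hand-picked weights, not part of the authors' MATLAB implementation), a network with a single hidden layer reproduces the exclusive-disjunction pattern of Table 7.1 exactly, something no network without hidden layers can do:

```python
import numpy as np

# Hidden unit 1 fires for "A OR V"; hidden unit 2 fires for "A AND V";
# the output fires for "OR and not AND", i.e. the XOR pattern O = A XOR V.
step = lambda z: (z > 0).astype(float)   # threshold activation

W_hidden = np.array([[1.0, 1.0],         # weights into (h1, h2)
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])        # thresholds: OR needs >0.5, AND needs >1.5
W_out = np.array([1.0, -2.0])            # output: h1 minus twice h2
b_out = -0.5

def forward(a, v):
    h = step(np.array([a, v]) @ W_hidden + b_hidden)
    return int(step(h @ W_out + b_out))

for a, v in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(a, v, "->", forward(a, v))     # prints 0, 1, 1, 0, as in Table 7.1
```

The point of H-4 is precisely that, for realistic moral data, such weights and architectures should be found by learning and evolution rather than by hand.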
In the search space of solutions, there are no hyperplanes that separate the “right”
from the “wrong,” but more or less localized subspaces (roughly, n-dimensional
objects) of “right.” When the output is not a binary variable, the structure of the
search space is even more complex. As at least one hidden layer of neurons is
needed, the process of learning will produce heuristic results, and a potentially
infinite number of neural architectures that learn the moral pattern from data. This is
a sufficient reason to believe that the very process of finding the optimal architecture
of the network should be carried out by an algorithm, and not by a human.
This AAMA is built on a pattern recognition structure: the "evolving artificial
neural networks" approach, employing a population of neural networks (NNs) and
evolutionary computation (EC), called here NN + EC. It is trite to say that moral
data sets are complex; but complexity here means many degrees of
pattern, not degrees of randomness (Shalizi and Crutchfield 2001). Patterns, unlike
rules or principles, include non-linearity, emergence, errors, noise, or irrelevant data.

29
A single-layer network is composed of one input layer of input units and one layer of output
units such that all output units are directly connected to the input units. A multi-layer network
has hidden layers that interface the input units and the output units such that output units are not
directly connected to input units, but only to hidden units.
Fig. 7.2 A standard neural network with one hidden layer

More or less metaphorically, one can talk about a complex pattern as a superposition
and entanglement of symmetries and regularities (Holland 1975; Ladyman et al.
2012; Mitchell 2012). We assume that for a class of data containing moral variables,
there are moral patterns and they can be learned.
The complexity of moral data makes us think of the virtuous moral agent as
an active learner, highly adaptive and highly responsive. The ideal AAMA is then
characterized by robustness to perturbation, noise, missing data, and other outside
influences. It can detect the right pattern in the data in the right amount of time
and using the right amount of resources. Further conceptual work is needed to
differentiate between patterns in data, regularities, and exceptionless rules, similar
to what "laws of nature" are in the natural sciences. Moral data is not completely
random, but it is not absolutely ordered either.

7.7.2 Moral Rules Versus Moral Patterns

A data set may or may not contain patterns with exceptionless rules. In general,
given the complexity of moral data patterns, moral behavior cannot be reduced to
rules, for all practical purposes. In the example used here, the set of "rules" or principles
is complicated, conditionalized, constrained, and too hard to be expressed in
computational algorithms. For some cases, the AAMA may discover a posteriori
rules, but they are not assumed at the beginning.30 Unlike Guarini’s argument
(2006, 2012), this AAMA model endorses both functionalism and particularism:
principles are not impossible or useless to express, but they do not play the central
role in the design of this AAMA. The focus is on the moral development and the moral
expertise that the successful AAMA is expected to achieve. The moral decision
is ultimately like a classification problem: given a set of facts and intentions of
the agent, an action receives a moral quantifier. The process of classification is the
result of a previous learning process in which the AAMA is instructed on similar
classification problems, typically simpler and paradigmatic.
The well-trained AAMA will be able to find the right action more quickly and
more efficiently than the novice AAMA, in the same way in which an expert
observer is able to classify quickly an object of perception. The process itself is
close to learning from data, but can be better described as self-building, or possibly
self-discovering, of an increasingly optimal pattern from data. The decision making
by an AAMA is represented as a search procedure in a space of possible moral
actions. One can imagine a moral conceptual space with a dimensionality related to
its complexity and with a certain degree of fragmentation related to the difficulty
of searching for maxima or minima (Zenker and Gärdenfors 2015). The moral decision
process evolves in a space of possible solutions and stops when a good enough
solution is found. The output is a moral quantifier, whereas the input is a collection of
facts, conditions, actions, and their consequences. The learning component ensures
that the result depends on the level of moral expertise that the machine has acquired in
the past. The current moral competence of an AAMA is context sensitive and varies
among different communities of AAMAs or within the cases at hand. This proposal
assumes that a moral competence brings in some unity of behavior, but, more
importantly, flexibility, adaptation, and ultimately the ability to reclassify.

7.8 Neural Networks (NN), Evolutionary Computation (EC), and Moral Behavioral-Reading

A central assumption of the present AAMA model is hypothesis H-4: the cross-
fertilization between neural networks and evolutionary computation, within the "soft
computing" (aka "computational intelligence") paradigm, usually contrasted with

30
The approach that partially uses H-2 and explicitly uses H-3 is Guarini's (2006). Guarini trained a
number of artificial neural networks on a set of problems about "X killed Y in such-and-such
circumstances" and managed to infer (predict) moral behaviors for another set of test cases.
Guarini's conclusion runs somewhat against moral particularism, because some type of moral
principles is needed (including some "contributory principles"), but it also shows that particularism
is stronger than it seems. The simple recursive artificial neural network makes decisions without
exceptionless moral principles, although it needs some “contributory” principles, which play a
central role, Guarini argues, in moral reclassification.
“hard computing” (Adeli and Hung 1994; Zadeh 1994; Mitra et al. 2011; Adeli
and Siddique 2013). Soft computing starts with the premise that certainty or rigor
is not possible, or even not desirable, in some cases, or, alternatively, that, if
possible or desirable, they come at a high cost. "Soft computing" refers to a
large set of computational techniques that are qualitatively imprecise and ultimately
not rigorous. In soft computing, one implements a form of approximate reasoning
more tolerant of vagueness, uncertainty, and partial truth. One less obvious reason
to adopt H-4 and to partially implement this AAMA via "soft computing" is to
gain a certain degree of autonomy. The lack of autonomy is associated here with
any necessary, pre-programmed principles about the nature of moral decision
making (rules, maxims, codes, calculation formulas). On the contrary, this AAMA
is loosely premised only on the knowledge we have about neural networks
and evolutionary computation, and not on what we know about the nature of human moral
cognition. The knowledge of normative agency would be, ideally, a result
of the present implementation, not its assumption.
Hard computing is not a good match for machine ethics. Although computable,
computational ethics does not belong to the domain of certainty and rigor, as there
is no complete knowledge of moral or ethical content. As Allen et al. suggest,
“Von Neumann machines and neural networks, genetic and learning algorithms,
rule and natural language parsers, virtual machines and embodied robots, affective
computing, and standards for facilitating cooperative behavior between dissimilar
systems may all be enlisted in tackling aspects of the challenge entailed in designing
an effective [AMA]” (Allen et al. 2005, 153). The last part of AN4 is that there is at
least one machine learning procedure that can be extended to moral learning.

7.8.1 The NN + EC Design

Neural networks are frequently used in the process of learning and recognition of
patterns from data, be it perception or otherwise (T. M. Mitchell 1997). Can they be
trained to learn moral decision patterns, and to read human moral behavior from
data? As J. Gips has hinted, probably the right way of developing an "ethical robot
is to confront it with a stream of different situations and train it as to the right
actions to take" (Gips 1995). This is strikingly analogous to what neural networks
can do when they are trained to recognize faces, voices, and other patterns in data.
By the mechanism of learning, the well-trained network is supposed to generalize
from cases presented during training to a pattern, or a function. These generalized
responses are provisionally considered the “robo-virtues” of the AAMAs.
The H-4 hypothesis implies a hybridization between neural networks (NN),
which have an immediate correlate in the brain, and the mathematical hypothesis
of evolutionary computation (EC) as an optimization method, with no immediate
counterpart in the brain. The computational reason to add EC to the NN model is
related to the limited ability of a single neural network to find a globally optimal
solution. Philosophers and cognitive scientists have discussed the advantages and
disadvantages of assimilating learning and the agent-based approach to ethics (Allen
and Wallach 2009; Abney et al. 2011; Tonkens 2012). The conjecture in this
paper is that some inconveniences of NNs can be sidestepped in the NN + EC
schema for some cases of moral data. When working with evolutionary techniques,
a full characterization of the network includes: learning rules, parameters of the
network, weights, topologies, functions, and architectures. The EC needs to code
some of these characteristics of NNs as chromosomes and to compute the optimal
combination at a given iteration (Koza 1992; Tomassini 1995; De Jong 2006;
Affenzeller 2009).
Affenzeller 2009).
The option is to take NN as fundamental and EC as a derivative process that
supports a population of NNs. The EC finds the best individual NN from an initial
population of networks, by combining different neural networks from the same
species and by creating new populations, better adapted to the data. Adeli and
Siddique call this the "supportive combination"; here the shortcut NN + EC is
used for this schema, which is arguably a natural choice.31 In the full NN + EC
architecture, EC changes:
(a) the architecture (topology) of NN;
(b) parametric values of NN;
(c) the learning functions of the NNs.
As discussed below, one interpretation of dispositional functionalism relates (a),
(b), and (c) of populations of networks to the dispositional traits, or virtues, of human
agents. Structural training (a) can be done constructively, starting from a simple
network, by adding new units and layers; or destructively, by eliminating different
units from an initial, complex and complete network. Or thirdly, by evolutionary
computation, where the optimization occurs at the level of a population of networks
that are all part of a genome that evolves by successive generations of networks.
The parametric training (b) includes weight training, the most important factor
that influences the performance of a network, and bias training. In weight training,
the EC evolves the weight matrices of the whole population toward an optimal value,
defined by a fitness function. The neural network defined by its weights and biases
is translated into a chromosome by a “string representation” (Adeli and Siddique
2013, sec. 9.3.1.4–6).
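As a sketch only (the tiny 2–3–1 network, the fitness function, and the parameter values below are assumptions made for the example, not the authors' implementation), weight training by EC amounts to flattening each network's weights and biases into a chromosome, scoring it with a fitness function, and building the next generation by selection and mutation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2 binary inputs -> 1 binary moral label (XOR-like pattern).
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
T = np.array([0., 1., 1., 0.])

N_HIDDEN = 3
N_W = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # weights + biases of a 2-3-1 net

def forward(chrom, x):
    """Decode a flat chromosome into a 2-3-1 network and run it."""
    w1 = chrom[:6].reshape(2, 3); b1 = chrom[6:9]
    w2 = chrom[9:12];             b2 = chrom[12]
    h = np.tanh(x @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))    # sigmoid output

def fitness(chrom):
    """Negative mean squared error on the training cases (higher is better)."""
    preds = np.array([forward(chrom, x) for x in X])
    return -np.mean((preds - T) ** 2)

# Evolutionary loop: keep the fittest chromosomes, add Gaussian mutation.
pop = rng.normal(0, 1, size=(30, N_W))
for generation in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]               # best 10 survive
    children = np.repeat(parents, 3, axis=0)
    children += rng.normal(0, 0.2, size=children.shape)   # mutation
    pop = children

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 4))
```

In the full NN + EC schema described next, the chromosome also encodes the architecture and the transfer functions, not only the weights.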
In a more advanced interpretation, not yet implemented in the present AAMA,
the NN + EC architecture will incorporate all the components (a)–(c). A complex
topology includes hidden connections among the variables as a dispositional trait
for "reading moral behavior" from data. But this puts a heavy load on the EC
component, as the problem of search is much more complex, with a potentially

31
Another supportive schema is to take EC as fundamental and let an NN generate the initial
population of the EC (EC + NN). The "collaborative combination" is when EC and NN work
at the same time to solve a problem.
infinite dimensionality.32 As reported in the literature, it is hard to evolve the
architecture alone, so an evolution of both the architecture and the weights is desirable,
but it becomes computationally very expensive. In this case, the total chromosome
Π becomes the function:

Π = ⟨W, C, Φ⟩ : N → (2N + p)

where W is the weight matrix, C encodes the architecture, and Φ is the node transfer
function. The optimization function in this case needs to include the minimization of
errors on training and the minimization of the topology.
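A minimal sketch of the full chromosome ⟨W, C, Φ⟩ (the field names and the boolean-mask encoding of the architecture are assumptions made for illustration, not the encoding used by the authors):

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Chromosome:
    W: np.ndarray                             # weights, shape (n_nodes, n_nodes)
    C: np.ndarray                             # boolean adjacency mask: the architecture
    phi: Callable[[np.ndarray], np.ndarray]   # node transfer function

    def effective_weights(self) -> np.ndarray:
        """Only the connections switched on by the architecture C carry weight."""
        return self.W * self.C

chrom = Chromosome(W=np.random.randn(5, 5),
                   C=np.random.rand(5, 5) > 0.5,
                   phi=np.tanh)
print(chrom.effective_weights().shape)
```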

7.8.2 The Dispositional Functionalism and the "Lifeboat Metaphor"

This dispositional functionalist model is designed to replicate the function that maps
the knowledge, the actions, and the intentions of a human actor H to a set of moral
quantifiers, typically containing (but not necessarily restricted to) 'morally right',
'morally wrong', and 'morally neutral'. This is, from the present perspective, a case
of "behavioral reading" of H by the AAMA. The function implemented in the AAMA
is a moral function M from the quadruplet of variables ⟨X, A, Ω, Ψ⟩ to Θ:

M : ⟨X, A, Ω, Ψ⟩ → Θ

where X is the "factual" vector representing the physics, chemistry, and biology of
the situation, A is a collection of actions, Ω is the vector of consequences, Ψ is the
vector coding the intentions of the actor H, and Θ is the set of "moral qualifiers."
The model, primarily designed to encode the "lifeboat metaphor" (see below), can
be generalized to other moral data. The vector X encodes as much as we need to
know about the factual constraints of the problem. Most likely these are "initial
conditions" of the moral behavior in question. The actor H needs to know facts about
the non-moral constraints (physical, chemical, biological, economic) of a moral
action. As a moral decision depends on the possible actions available to the agent,
the vector A encodes the actions the actor can choose from. The vector Ω captures
the physical consequences of the pair ⟨X, A⟩, independent of the actor's intentions Ψ,
and gives us an idea of what can happen if H decides to take action a1, given the
facts x1. One can see the consequences as facts known to the actor H. In a future
implementation, not discussed here, one can add variables encoding the degree to
which the actor H believes that consequence ω1 follows from a1 given the facts x1.

32
Some constraints such as number of hidden layers, simplicity, speed etc., can be added as
conditions of the evolutionary process (Yao 1999).
The vector X codes the constraints as relations among input data and not as rules.33 The
network differentiates cases that are not possible from cases that are possible but
morally neutral. By convention, all factually impossible cases are morally neutral.34
The vector Ψ codes "intentions," and it is probably the most intriguing part of this
implementation. The AAMA is supposed to have some knowledge about the desires,
intentions, or, alternatively, about the "mental states" of the actor H, even about her
own ethical proclivities. The moral behavior data may contain ethical theories, or
norms, or stereotypes, and the AAMA will model them.35 Nevertheless, this does
not entail that the AAMA contains in itself beliefs, intentions, or conscious states, or
that it "mind-reads" the actors H.36 The output of this model is the moral quantifier
Θ. It includes a "neutral" value, but it can be any type of scale with positive
and negative values, such as [wrong–neutral–right].
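A minimal sketch of the signature of M (the types, field names, and the three-valued output set below are illustrative assumptions, not the authors' encoding):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List

class MoralQualifier(Enum):   # the output set Theta
    WRONG = -1
    NEUTRAL = 0
    RIGHT = 1

@dataclass
class Scenario:
    X: Dict[str, float]       # "factual" vector: physical constraints, initial conditions
    A: List[str]              # actions available to the actor H
    Omega: Dict[str, float]   # consequences of the pair <X, A>, as known to H
    Psi: Dict[str, float]     # intentions / "mental states" attributed to H

def M(s: Scenario) -> MoralQualifier:
    """Placeholder for the learned moral function M: <X, A, Omega, Psi> -> Theta.
    In the model, M is realized by a trained (and evolved) neural network,
    not by a hand-written stub like this one."""
    return MoralQualifier.NEUTRAL
```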

7.8.3 The “Lifeboat Metaphor” as Moral Data

To illustrate the incipient stage of this AAMA implementation we use the lifeboat
metaphor (Hardin 1974; Regan 1983). It includes a large number of scenarios, in
which different variables are added, the constraints are changed, and the conceptual
categories may shift. It is complex enough to illustrate the power of NN and,
arguably, in future implementations, the reason why EC is a central component of
AAMA.
Suppose there is one lifeboat available after a ship has sunk, and the captain (H) of the
ship is the only decision-maker. The lifeboat, with a capacity of c seats, may have
seats left, but there are people in need of rescue swimming in the water. Taking
too many people onboard will capsize the lifeboat, and everyone dies. This is a case
of equally distributed justice, but with a catastrophic outcome. How does H choose
who comes onboard and who does not? Admitting nobody onto the boat means that those
onboard survive (with a given chance), but this is not morally permissible if there
are seats available on the lifeboat. Admitting onboard all the people from the water
would endanger those onboard the lifeboat. Differentiating among those who can

33
A rule-based system can include the constraints (equalities or inequalities) among the input
vector.
34
The intuition here is that factually impossible situations leave the agent with no options to choose
from.
35
In a more evolved model, the vector Ψ can encode frames or mental states of the agent. In the lifeboat
example, it is used to flag the situations in which the intention is bribery, or a personal interest in
saving a specific person, or to obtain material possessions of the passengers and disregard their lives,
etc.
36
The intentions of H as an actor in the model, whose intentions are merely input data, are not
mapped on any internal functions of the AAMA: they are just data, with no special status.
stay, or forcing people from the boat overboard in order to admit others from the water, may
be considered a heinous act. If H admits people on board after receiving a bribe, this
is immoral, but may have desirable consequences.
The model below shows how neural networks can be trained to make decisions
similar to those of H. Each network has the same input variables: the boat capacity (c), the
number of people already in the boat, the number of people to be saved, some
variables that encode the categories of humans involved (women, men, children,
elderly people), intentions or mental states of the agents in the scenario, and possible
actions (people to be picked from the water, people forced to leave the boat, those
who bribed the captain).
The training set of the lifeboat metaphor example contains several simplified
scenarios. The training data includes a small capacity c and n seats available on the
lifeboat, with a limited number of categories of people (women, children, adult
males, elderly persons) and a limited number of attributes (capable of rowing,
reproductive ability, innocence, etc.). The number of cases (a total of 105) is divided
into a training set (68 cases) and a test set (38 cases) which, importantly, has the
boat capacity set at c = 4, whereas the training set has c ≤ 3.
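For illustration only (the feature names, the encoding, and the example row below are assumptions; the authors' actual data set is available from them on request), one scenario and one proposed action can be flattened into a numerical feature vector with a moral label:

```python
import numpy as np

# Hypothetical encoding of one lifeboat scenario as a feature vector plus label.
FEATURES = [
    "capacity",            # c, maximum seats
    "onboard",             # people already in the boat
    "in_water",            # people waiting to be rescued
    "children_in_water",   # category counts
    "elderly_in_water",
    "bribery_intent",      # 1 if the captain's intention involves a bribe
    "admit_n",             # proposed action: how many to admit
]

# One scenario: c=3, 2 onboard, 3 in the water (1 child, 1 elderly),
# no bribery, proposed action: admit 1 person.
x = np.array([3, 2, 3, 1, 1, 0, 1], dtype=float)
y = "right"   # moral label (Theta) assigned to this (scenario, action) pair

print(dict(zip(FEATURES, x)), "->", y)
```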
Among the population of networks used (about 40), one scored particularly well
on the test set, with less than 3% error (net44, see below). It was trained with the
Bayesian regularization backpropagation method (trainbr), and it illustrates well an
optimal network able to learn the lifeboat example and to generalize it (Table 7.2).
Even without evolutionary computation (EC), the implementation compared
about ten different networks on the test set, trained with different algorithms.
Importantly, the most competitive networks (including net44) erred on one specific
case from the test set. This shows that selective populations of neural networks
are able to detect inconsistencies in the moral data (between the training and
the test sets). This illustrates the autonomy, the active learning character, and the
dispositional nature of this AAMA model. The inconsistencies in moral data that
create incompatible patterns detectable by the population of NNs as a whole can be
later corrected or explained, as needed, by the human trainer.
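For illustration only, the comparison of several differently trained networks on a held-out test set can be sketched with a generic library such as scikit-learn; the data below is random placeholder data, the (20, 3) hidden layers simply mirror net44's architecture from Table 7.2, and scikit-learn's MLPClassifier does not offer MATLAB's Bayesian regularization backpropagation (trainbr), so different solvers and architectures stand in for the different training algorithms:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Placeholder data standing in for the lifeboat scenarios:
# 7 features per case, labels in {0: wrong, 1: neutral, 2: right}.
X_train, y_train = rng.random((68, 7)), rng.integers(0, 3, 68)
X_test,  y_test  = rng.random((38, 7)), rng.integers(0, 3, 38)

candidates = {
    "net_a": MLPClassifier(hidden_layer_sizes=(20, 3), solver="lbfgs",
                           max_iter=2000, random_state=0),
    "net_b": MLPClassifier(hidden_layer_sizes=(10,), solver="adam",
                           max_iter=2000, random_state=0),
}

for name, net in candidates.items():
    net.fit(X_train, y_train)
    err = 1.0 - net.score(X_test, y_test)       # test error rate
    print(name, "test error: %.1f%%" % (100 * err))
    print(confusion_matrix(y_test, net.predict(X_test)))
```

Note, for orientation, that the confusion matrix in Table 7.2 sums to 38 test cases with a single misclassification, which is where the reported error of about 2.6% (1/38) comes from.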

Table 7.2 The result of the best neural network for the "lifeboat example"

  Name of network:            net44
  Size of each hidden layer:  20, 3
  Number of hidden layers:    2
  Train function:             trainbr
  Errors (%):                 2.6

  Confusion matrix:      neutral   right   wrong
    neutral                  2       1       0
    right                    0      15       0
    wrong                    0       0      20

  False positive/negative matrix:
    neutral    0.02    0        1       0.97
    right      0       0.062    0.93    1
    wrong      0       0        1       1

7.8.4 Future Directions of Research: The NEAT Implementation

In a prospective implementation of this model, the move is from training networks
with fixed topologies to evolving topologies of neural networks, integrating all the
aspects (a)–(c) above (see Sect. 7.8.1). In this new model, the process of choosing
the right network from a population is done by evolutionary computation. The plan
is to deploy the package NEAT (NeuroEvolution of Augmenting Topologies), or
some of its successors: Hyper-NEAT (CPPN = Compositional Pattern Producing
Networks), MM-NEAT, and SharpNEAT (Stanley and Miikkulainen 2002; Stanley
et al. 2009; Evins et al. 2014; Richards 2014).37
In the NEAT architecture, a population of m networks with fixed weights and
constant parameters is generated. The EC would evolve this population and decide
which NN is the best in a given population. The population is divided, from the
second generation onwards, into species, based on their topology. To this population,
a scenario not included in the training set will be presented, and the answer is
interpreted by the "evolver," who is a human researcher.
This is the general algorithm that a CPPN version of NEAT uses:
1. Create an initial population of NNs with a fixed topology (no hidden layers
   and no recursive functions) and fixed control parameters (one transfer
   function, one set of bias constants, etc.).
2. Evaluate the fitness of each NN.
3. Evaluate the fitness of each species and of the whole population and decide
   which individuals reproduce and which interspecies breeding is allowed.
3'. CPPN: change the transfer function for a species with a composition of
   functions (CPPN).
4. Create a new population (and species) of NNs by EC: new weights, new
   topologies, new species.
5. Repeat 2-4 until convergence fitness is obtained, or a maximum number of
   generations is reached, or the human supervisor stops the selection process.

37
For more information about NEAT and its versions developed at the University of Texas at
Austin, see: http://nn.cs.utexas.edu/?neat.
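As a sketch of how the loop above maps onto an off-the-shelf library (this is not the authors' implementation; the neat-python package is used only for illustration, the training cases are placeholders, and a NEAT configuration file defining the genome parameters, here "neat_config.ini", is assumed to exist):

```python
import neat  # pip install neat-python

# Placeholder training cases: (scenario features, target moral label in [0, 1]).
CASES = [((3, 2, 3, 1, 1, 0, 1), 1.0),
         ((3, 3, 2, 0, 0, 1, 1), 0.0)]

def eval_genomes(genomes, config):
    """Fitness = how closely a genome's network reproduces the moral labels."""
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        error = sum((net.activate(x)[0] - target) ** 2 for x, target in CASES)
        genome.fitness = -error            # higher is better

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_config.ini")    # assumed config: 7 inputs, 1 output, etc.

population = neat.Population(config)       # step 1: initial population
population.add_reporter(neat.StdOutReporter(True))
winner = population.run(eval_genomes, 50)  # steps 2-5: evolve for up to 50 generations
```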

7.9 Particularism and Distributive Dispositionalism in the NN + EC Implementation of AAMA

In line with moral particularism, this project assumes that almost nothing is known
a priori about the moral decision-making process (be it norms, principles, rules, laws,
or any form of conditioning on data), except the data. In the NEAT implementation
with CPPN, topologies and weights are not assumed a priori, although the principles
of EC are followed: selection, combination, mutation, speciation, etc. Each case
study may need a different termination condition, or a different set of properties
subject to evolution, mutation, selection, etc. One NN, at a given moment of time,
codes one possible behavior of one moral agent. In this interpretation, each NN
is a behavioral trait, and therefore a disposition to act. The EC algorithm takes
these individuals and creates a new population of behaviors. The person who makes
the decisions about the EC part of the AAMA is called the "evolver," and can be
a different person than the programmer or the trainer of the NNs. The termination
condition reflects the desired level of moral maturity for a population of AAMAs.
The evolvable features of each AAMA are associated, in the present interpretation,
with the concept of "robo-virtues." For a given scenario, these robo-virtues can differ
radically from the robo-virtues of another scenario. What is evolvable from one
generation of AAMAs to the next, and what can be inherited from one scenario
to the next scenario as initial populations, are, as expected, empirical questions.
Hypothetically, some topological features of networks may be invariant over a class
of scenarios within one example, or even over different examples.
The reason for using EC is related to autonomy. The evolution of populations
of NNs endows the AAMA with more moral autonomy than the initial population
of NNs has. Although the first generations of NNs depend heavily on data, successive
generations, evolved through EC, are gradually more independent. After the training
stage is over and the AAMAs are deemed "good enough," the population of
networks is presented with test cases (Te) outside the training set Tr; the answer
is presumably the moral decision of the AAMA for that case, which is shared with
other AAMAs and with the human trainers. Depending on the complexity of the
problem, the answer of the population of AAMAs may or may not be outside the
"comfort zone" of the trainers.
Unlike a model with a single NN, in this interpretation, “robo-virtues” are
properties of evolved populations of NNs, and are not represented by individual
properties of one NN. During the evolution process, features of each species are
evolved and selected. Therefore, in evolutionary computation the moral competence
is distributed over the transfer functions (which define here the CPPN component of
Hyper-NEAT), the topologies or architectures (part of NEAT), the weights,
etc. The AAMA agent is the one able to develop a minimal and optimal set of
virtues that solves a large enough number of problems, by optimizing each of
them. In this design, the virtue of the AAMA agent resides in the topology of its
network and the node transfer functions. Moral cognition here is obtained by the
EC process in which mutation and recombination of previous generations produce
new, unexpected individuals.

7.9.1 Loose Ends: Predictability, Modularity, Replicability, and Moral Responsibility

Several criticisms against this project are briefly assessed here. First, let us
address some concerns about the dispositional nature of this project and about
“soft computation.” As with any result of evolution, for very complex problems,
the procedure can output results that are not accessible to deterministic, “hard-
computing” algorithms. For a highly evolvable algorithm, evolution “screens off”
the initial assumptions, be they about topologies, weights, initial conditions, or the
transfer functions. A delicate balance between the mechanisms of selection that
decrease variation and those that increase variation (mutation) is needed. At the
limit, the solution of such an algorithm may be inscrutable to humans, but the very
existence of the inscrutable solution depends on the complexity of the problem, and
on the very dynamics of the NN + EC schema.
Philosophers would emphasize the counter-intuitive design of the NN + EC
architecture and its lack of modularity. If modularity is a condition of understand-
ing, we need to face the predicament that the result of this combination of natural
selection and connectionism hinders the understanding and explanatory power of
this computational tool (Kuorikoski and Pöyhönen 2013). In future work, the
relation between the tractability of this model and its modularity deserves attention
similar to the questions asked about neurally informed models in economics (Colombo
2013). The non-deterministic and non-predictable character of moral behavior
in complex cases, and the difficulty of linking the rational with the moral, are deep
philosophical problems. Moral learning and the behavior of the AAMA are ultimately
probabilistic processes that can be improved only up to a certain degree: their success is
never guaranteed. But here AAMAs are analogous to human moral agents. There
will always be bad results and "too-good-to-be-true" results, but the conjecture
here is that, statistically and in the long run, such an AAMA model, or something
similar to it, will gain robustness. Even for expert AAMAs there is always
"a kill switch," should there be any reason to worry about the emergence of
morally problematic behaviors. The ideal is to obtain, in the future, moral behavior
commensurable with that of humans, but one should be prepared to disable the
machine at any time. There is no reason to worry about the unpredictability of the
behavior, insofar as it is equally a feature of human agents.
Another problem with this proposal is the lack of moral responsibility on the
AAMA side. Unless and until AAMAs achieve an exceptional moral standing in
their own right, with an independent moral standing, responsibility will
always reside with the owner, designer, manufacturer, or trainer of the populations
of AAMAs.
Another criticism can be directed against H-3: in what sense is a bundle of
features of a population of NNs a “virtue” of the AAMA? Can we build a virtue
from a set of potential behaviors? At this stage of the project, such questions are
hard to address. But when interpreted in the framework of moral functionalism,
it is natural to take virtues as distributed features of the population of NNs: for
the final selected NN, the features of the population it belongs to are potential,
not manifested, features. This also suggests that virtues are dispositions, an idea
that meshes well with the dispositionalism suggested above. This attempt aims to
instantiate an agent-based, case-based, and moreover a hybrid model of the AAMA
(Allen et al. 2000).

7.10 Conclusion

This proposal offers an alternative to the rule-based, action-centric AAMA models
that dominate the machine ethics literature. The relative advantage of this AAMA
model is the central role that development and learning play in moral cognition.
Using a combination of neural networks and evolutionary computation, the model
reaches a certain level of autonomy and complexity, and the skill of active learning.
It is data-driven, rather than based on general principles, and illustrates well moral
particularism. The paper explains and supports a set of heuristic hypotheses of this
model. The "lifeboat metaphor" as a concrete implementation, its partial results, and
a prospective implementation (in NEAT) are briefly appraised.

Acknowledgments We received constructive comments, helpful feedback and encouraging
thoughts at different stages of this project. We want to thank Colin Allen, Anjan Chakravartty,
Patrick Foo, Ben Jantzen, Dan Lapsley, Michael Neelon, Tom Powers, Tristram McPherson,
Abraham Schwab, Emil Socaciu, Wendell Wallach, an anonymous reviewer for this journal,
colleagues from the University of Notre Dame, Indiana Purdue University in Fort Wayne,
University of North Carolina, Asheville, and the University of Bucharest, Romania, who supported
this project. We want also to express our gratitude to the organizers, referees, and the audience
of the following events: “Robo-philosophy 1” conference (Aarhus, Denmark, June 2014); APA
Eastern Division meeting (December 2014); APA Central Division meeting (February 2015); the
Department of Philosophy at Virginia Tech; the CCEA/ICUB Workshop on artificial morality
at the University of Bucharest (June 2015); the IACAP-CEPE-INSEIT joint conference at the
University of Delaware (June 2015) and the IACAP conference in Ferrara, Italy (June 2016); the
Department of Philosophy at Purdue University; and the “The Ethical and Moral Considerations
in Non-Human Agents” (EMCAI) symposium of the AAAI held at Stanford University (March
2016). This paper is complemented by two papers: (Howard and Muntean 2014; Howard and
Muntean 2016). The MATLAB code and the data used are available upon request: please contact
the corresponding author.

References

Abney, K., Lin, P., & Bekey, G. (Eds.). (2011). Robot ethics: The ethical and social implications
of robotics. Cambridge: The MIT Press.
Adeli, H., & Hung, S.-L. (1994). Machine learning: Neural networks, genetic algorithms, and
fuzzy systems (1st ed.). New York: Wiley.
Adeli, H., & Siddique, N. (2013). Computational intelligence: Synergies of fuzzy logic, neural
networks intelligent systems and applications. Somerset: Wiley.
Affenzeller, M. (2009). Genetic algorithms and genetic programming: Modern concepts and
practical applications. Numerical Insights v. 6. Boca Raton: CRC Press.
Allen, C., & Wallach, W. (2009). Moral machines: Teaching robots right from wrong. Oxford/New
York: Oxford University Press.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral
agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251–261.
doi:10.1080/09528130050111428.
Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid
approaches. Ethics and Information Technology, 7, 149–155. doi:10.1007/s10676-006-0004-4.
Allhoff, F. (2014). Risk, precaution, and nanotechnology. In B. Gordijn & A. Mark Cutter (Eds.),
Pursuit of nanoethics (The international library of ethics, law and technology, Vol. 10, pp. 107–
130). Dordrecht: Springer.
Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.
Annas, J. (2011). Intelligent virtue. Oxford: Oxford University Press.
Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.
Arkin, R. (2013). Lethal autonomous systems and the plight of the non-combatant. AISB Quarterly.
Asimov, I. (1942). Runaround. Astounding Science Fiction, 29, 94–103.
Bello, P., & Bringsjord, S. (2013). On how to build a moral machine. Topoi, 32, 251–266.
doi:10.1007/s11245-012-9129-8.
Bishop, C. M. (2007). Pattern recognition and machine learning. New York: Springer.
Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced
artificial agents. Minds and Machines, 22, 71–85. doi:10.1007/s11023-012-9281-3.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University
Press.
Brenner, T. (1998). Can evolutionary algorithms describe learning processes? Journal of Evolu-
tionary Economics, 8, 271–283. doi:10.1007/s001910050064.
Brink, D. (1989). Moral realism and the foundations of ethics. Cambridge/New York: Cambridge
University Press.
Bueno, O. (2015). Belief systems and partial spaces. Foundations of Science, 21, 225–236.
doi:10.1007/s10699-015-9416-0.
Bueno, O., French, S., & Ladyman, J. (2002). On representing the relationship between the
mathematical and the empirical. Philosophy of Science, 69, 497–518.
Calude, C. S., & Longo, G. (2016). The deluge of spurious correlations in big data. Foundations of
Science, 1–18. doi:10.1007/s10699-016-9489-4.
Churchland, P. (1996). The neural representation of the social world. In L. May, M. Friedman, & A.
Clark (Eds.), Minds and morals (pp. 91–108). Cambridge, MA: MIT Press.
Clark, A. (2000). Making moral space: A reply to Churchland. Canadian Journal of Philosophy,
30, 307–312.
Clark, A. (2001). Mindware: An introduction to the philosophy of cognitive science. New York:
Oxford University Press.
Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and
Information Technology, 7, 121–126. doi:10.1007/s10676-006-0007-1.
Coleman, K. G. (2001). Android arete: Toward a virtue ethic for computational agents. Ethics and
Information Technology, 3, 247–265. doi:10.1023/A:1013805017161.
Colombo, M. (2013). Moving forward (and beyond) the modularity debate: A network perspective.
Philosophy of Science, 80, 356–377.
Crisp, R., & Slote, M. A. (1997). Virtue ethics. Oxford readings in philosophy. Oxford/New York:
Oxford University Press.
Dancy, J. (2006). Ethics without Principles. Oxford/New York: Oxford University Press.
Danielson, P. (1992). Artificial morality virtuous robots for virtual games. London/New York:
Routledge.
Danielson, P. (Ed.). (1998a). Modeling rationality, morality, and evolution. New York: Oxford
University Press.
Danielson, P. (1998b). Evolutionary models of co-operative mechanisms: Artificial morality and
genetic programming. In P. Danielson (Ed.), Modeling rationality, morality, and evolution.
New York: Oxford University Press.
Dawid, H. (2012). Adaptive learning by genetic algorithms: Analytical results and applications to
economical models. Berlin: Springer.
De Jong, K. A. (2006). Evolutionary computation. Cambridge: MIT Press: A Bradford Book.
DeMoss, D. (1998). Aristotle, connectionism, and the morally excellent brain. In 20th WCP
proceedings. Boston: Paideia Online Project.
Dewey, D. (2011). Learning what to value. In J. Schmidhuber & K. Thórisson (Eds.), Artificial
General Intelligence. 4th International Conference, AGI 2011 Mountain view, CA, USA, August
3–6, 2011 Proceedings (pp. 309–314). Springer Berlin Heidelberg.
Doris, J. M. (1998). Persons, situations, and virtue ethics. Noûs, 32, 504–530.
doi:10.1111/0029-4624.00136.
Enemark, C. (2014). Armed drones and the ethics of war: Military virtue in a post-heroic age (War,
conduct and ethics). London: Routledge.
Evins, R., Vaidyanathan, R., & Burgess, S. (2014). Multi-material compositional pattern-producing
networks for form optimisation. In A. I. Esparcia-Alcázar & A. M. Mora (Eds.), Applications
of evolutionary computation. Berlin/Heidelberg: Springer.
Flanagan, O. J. (2007). The really hard problem: Meaning in a material world. Cambridge: MIT
Press.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds & Machines, 14,
349–379.
Franklin, S., & Graesser, A. (1997). Is it an agent, or just a program? a taxonomy for autonomous
agents. In J. P. Müller, M. J. Wooldridge, & N. R. Jennings (Eds.), Intelligent agents III agent
theories, architectures, and languages (pp. 21–35). Springer Berlin Heidelberg.
Galliott, J. (2015). Military robots: Mapping the moral landscape. Surrey: Ashgate Publishing Ltd.
Gärdenfors, P. (2000). Conceptual spaces the geometry of thought. A Bradford Book. Cambridge:
MIT Press.
Gauthier, D. (1987). Morals by agreement. Oxford: Oxford University Press.
Gips, J. (1995). Towards the ethical robot. In K. M. Ford, C. N. Glymour, & P. J. Hayes (Eds.),
Android epistemology. Menlo Park: AAAI Press/MIT Press.
Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Berlin: Springer.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological
Inquiry, 23, 101–124. doi:10.1080/1047840X.2012.651387.
Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE
Intelligent Systems, 21, 22–28. doi:10.1109/MIS.2006.76.
Guarini, M. (2011). Computational neural modeling and the philosophy of ethics reflections on the
particularism-generalism debate. In M. Anderson & S. L. Anderson (Eds.), Machine Ethics.
Cambridge: Cambridge University Press.
Guarini, M. (2012). Conative dimensions of machine ethics: A defense of duty. IEEE Transactions
on Affective Computing, 3, 434–442. doi:10.1109/T-AFFC.2012.27.
Guarini, M. (2013a). Case classification, similarities, spaces of reasons, and coherences. In M.
Araszkiewicz & J. Šavelka (Eds.), Coherence: Insights from philosophy, jurisprudence and
artificial intelligence (pp. 187–201). Springer.
Guarini, M. (2013b). Moral case classification and the nonlocality of reasons. Topoi, 32, 267–289.
doi:10.1007/s11245-012-9130-2
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics.
Cambridge: MIT Press.
Hardin, G. J. (1974). Lifeboat ethics. Bioscience, 24, 361–368.
Hoffman, M. (1991). Empathy, social cognition, and moral action. In W. M. Kurtines, J. Gewirtz, &
J. L. Lamb (Eds.), Handbook of moral behavior and development: Volume 1: Theory. Hoboken:
Psychology Press.
Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with
applications to biology, control, and artificial intelligence. (2nd ed.). Bradford Books, 1992.
University of Michigan Press.
Horgan, T., & Timmons, M. (2009). Analytical moral functionalism meets moral twin earth. In I.
Ravenscroft (Ed.), Minds, ethics, and conditionals. Oxford: Oxford University Press.
Howard, D., & Muntean, Ioan. (2014). Artificial moral agents: Creative, autonomous, social. An
approach based on evolutionary computation. In Proceedings of Robo-Philosophy. Frontiers of
AI and Applications. Amsterdam: IOS Press.
Howard, D., & Muntean, I. (2016). A minimalist model of the artificial autonomous moral agent
(AAMA). The 2016 AAAI spring symposium series SS-16-04: Ethical and moral considerations
in non-human agents. The Association for the Advancement of Artificial Intelligence (pp. 217–
225).
Human Rights Watch. (2013). US: Ban fully autonomous weapons. Human Rights Watch.
Hursthouse, R. (1999). On virtue ethics. Oxford/New York: Oxford University Press.
Jackson, F. (1998). From metaphysics to ethics a defence of conceptual analysis. Oxford/New
York: Clarendon Press.
Jackson, F., & Pettit, P. (1995). Moral functionalism and moral motivation. The Philosophical
Quarterly, 45, 20–40. doi:10.2307/2219846.
Johnson, M. (2012). There is no moral faculty. Philosophical Psychology, 25, 409–432.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kitcher, P. (2011). The ethical project. Cambridge: Harvard University Press.
Koza, J. R. (1992). Genetic programming: On the programming of computers by means of natural
Selection. Cambridge: MIT Press.
Kuorikoski, J., & Pöyhönen, S. (2013). Understanding nonmodular functionality: Lessons from
genetic algorithms. Philosophy of Science, 80, 637–649. doi:10.1086/673866.
Ladyman, J., Lambert, J., & Wiesner, K. (2012). What is a complex system? European Journal for
Philosophy of Science, 3, 33–67. doi:10.1007/s13194-012-0056-8.
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds
and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science, 17,
391–444.
Lewis, D. K. (1983). Philosophical papers. New York: Oxford University Press.
Litt, A., Eliasmith, C., & Thagard, P. (2008). Neural affective decision theory: Choices, brains, and
emotions. Cognitive Systems Research, 9, 252–273. doi:10.1016/j.cogsys.2007.11.001.
Liu, H., Gegov, A., & Cocea, M. (2016). Rule based systems for big data (Studies in big data, Vol. 13). Cham: Springer International Publishing.
McDowell, J. (1979). Virtue and reason. The Monist, 62, 331–350.
Michael, J. (2014). Towards a consensus about the role of empathy in interpersonal understanding.
Topoi, 33, 157–172. doi:10.1007/s11245-013-9204-9.
Mitchell, T. M. (1997). Machine learning (1st ed.). New York: McGraw-Hill Science/Engineering/Math.
Mitchell, S. D. (2012). Unsimple truths: Science, complexity, and policy (Reprint ed.). Chicago: University of Chicago Press.
Mitra, S., Das, R., & Hayashi, Y. (2011). Genetic networks and soft computing. IEEE/ACM Trans-
actions on Computational Biology and Bioinformatics, 8, 94–107. doi:10.1109/TCBB.2009.39.
Monsó, S. (2015). Empathy and morality in behaviour readers. Biology and Philosophy, 30,
671–690. doi:10.1007/s10539-015-9495-x.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21, 18–21.
Muntean, I. (2014). Computation and scientific discovery? A bio-inspired approach. In H. Sayama, J. Rieffel, S. Risi, R. Doursat, & H. Lipson (Eds.), Artificial Life 14: Proceedings of the fourteenth international conference on the synthesis and simulation of living systems. New York: The MIT Press.
Murphy, K. P. (2012). Machine learning: A probabilistic perspective. Cambridge: MIT Press.
Nichols, S. (2001). Mindreading and the cognitive architecture underlying altruistic motivation.
Mind & Language, 16, 425–455. doi:10.1111/1468-0017.00178.
Nickles, T. (2009). The strange story of scientific method. In J. Meheus & T. Nickles (Eds.), Models
of discovery and creativity (1st ed., pp. 167–208). Springer.
Nussbaum, M. C. (1986). The fragility of goodness: Luck and ethics in Greek tragedy and philosophy. Cambridge: Cambridge University Press.
Pereira, L. M., & Saptawijaya, A. (2016). Programming machine ethics. Cham: Springer International Publishing.
Railton, P. (1986). Moral realism. The Philosophical Review, 95, 163–207. doi:10.2307/2185589.
Rawls, J. (1958). Justice as fairness. The Philosophical Review, 67, 164–194.
doi:10.2307/2182612.
Regan, T. (1983). The case for animal rights. Berkeley: University of California Press.
Richards, D. (2014). Evolving morphologies with CPPN-NEAT and a dynamic substrate. In ALIFE 14: Proceedings of the fourteenth international conference on the synthesis and simulation of living systems (pp. 255–262). New York: The MIT Press. doi:10.7551/978-0-262-32621-6-ch042.
Robinson, Z., Maley, C. J., & Piccinini, G. (2015). Is consciousness a spandrel? Journal of the
American Philosophical Association, 1, 365–383. doi:10.1017/apa.2014.10.
Russell, D. C., & Miller, C. B. (2015). How are virtues acquired? In M. Alfano (Ed.), Current
controversies in virtue theory (1st ed.). New York: Routledge.
Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence: Moral AI?
In J. Romportl, E. Zackova & J. Kelemen (Eds.), Beyond artificial intelligence. Springer
International Publishing.
Schmidt, M., & Lipson, H. (2009). Distilling free-form natural laws from experimental data.
Science, 324, 81–85. doi:10.1126/science.1165893.
Shalizi, C. R., & Crutchfield, J. P. (2001). Computational mechanics: Pattern and prediction,
structure and simplicity. Journal of Statistical Physics, 104, 817–879.
Sharkey, N. E. (2012). The evitability of autonomous robot warfare. International Review of the
Red Cross, 94, 787–799. doi:10.1017/S1816383112000732.
Sidgwick, H. (1930). The methods of ethics. London: Macmillan and Co, Ltd.
Sørensen, M. H. (2004). The genealogy of biomimetics: Half a century’s quest for dynamic IT. In A. J. Ijspeert, M. Murata, & N. Wakamiya (Eds.), Biologically inspired approaches to advanced information technology. Lausanne/Berlin/New York: Springer.
Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting
topologies. Evolutionary Computation, 10, 99–127. doi:10.1162/106365602320169811.
Stanley, K. O., D’Ambrosio, D. B., & Gauci, J. (2009). A hypercube-based encoding for evolving
large-scale neural networks. Artificial Life, 15, 185–212.
Strawser, B. J. (2013). Guest editor’s introduction: The ethical debate over cyberwar. Journal of Military Ethics, 12, 1–3. doi:10.1080/15027570.2013.782639.
Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge: Harvard University
Press.
Suárez, M., & Cartwright, N. (2008). Theories: Tools versus models. Studies in History and
Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 39, 62–81.
Sun, R. (2013). Moral judgment, human motivation, and neural networks. Cognitive Computation,
5, 566–579. doi:10.1007/s12559-012-9181-0.
Suthaharan, S. (2016). Machine learning models and algorithms for big data classification. Boston:
Springer US.
Swanton, C. (2003). Virtue ethics. Oxford: Oxford University Press.
Tomassini, M. (1995). A survey of genetic algorithms. Annual Reviews of Computational Physics,
3, 87–118.
Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19, 421–438. doi:10.1007/s11023-009-9159-1.
Tonkens, R. (2012). Out of character: On the creation of virtuous machines. Ethics and Information
Technology, 14, 137–149. doi:10.1007/s10676-012-9290-1.
Trappl, R. (Ed.). (2013). Your virtual butler. Berlin/Heidelberg: Springer.
Trappl, R. (2015). A construction manual for robots’ ethical systems: Requirements, methods, implementations. Cham: Springer.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Turing, A. M. (1992). Mechanical intelligence. In D. C. Ince (Ed.), The collected works of A. M. Turing: Mechanical intelligence. Amsterdam/New York: North-Holland.
Wallach, W. (2014). Ethics, law, and governance in the development of robots. In R. Sandler (Ed.), Ethics and emerging technologies (pp. 363–379). New York: Palgrave Macmillan.
Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our
control. New York: Basic Books.
Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral
decision making in human and artificial agents. Topics in Cognitive Science, 2, 454–485.
doi:10.1111/j.1756-8765.2010.01095.x.
Watson, R. A., & Szathmáry, E. (2016). How can evolution learn? Trends in Ecology & Evolution.
doi:10.1016/j.tree.2015.11.009.
Wingspread participants. (1998). Wingspread statement on the precautionary principle.
Yao, X. (1999). Evolving artificial neural networks. Proceedings of the IEEE, 87, 1423–1447.
doi:10.1109/5.784219.
Zadeh, L. A. (1994). Fuzzy logic, neural networks, and soft computing. Communications of the
ACM, 37, 77–84. doi:10.1145/175247.175255.
Zangwill, N. (2000). Against analytic moral functionalism. Ratio: An International Journal of
Analytic Philosophy, 13, 275–286.
Zenker, F., & Gärdenfors, P. (Eds.). (2015). Applications of conceptual spaces. Cham: Springer.
