September 1–5, 2020
Caderno de Resumos
Filosofia Analítica no Século XXI:
Novos Desenvolvimentos
Abstracts Book
Analytical Philosophy in the 21st Century:
New Developments
Sofia Stein • Viviane Braga
(Eds.)
Caderno de Resumos
Abstracts Book
Porto Alegre
Editorial Philosophia
2020
Organizing Committee:
Sofia Inês Albornoz Stein (President of SBFA)
Andrea Faggion (Treasurer of SBFA)
Caio Casagrande (Media Support and Moderator)
Keynote Speakers:
Dr. Barry C. Smith (Institute of Philosophy, UoL, London, UK)
Dr.ª Juliet Floyd (Boston University, Boston, MA USA)
Dr.ª Maria Eunice Quilici Gonzalez (Universidade Estadual Paulista, São Paulo, BR)
Dr.ª Nélida Gentile (Universidad de Buenos Aires, Buenos Aires, AR)
Dr.ª Teresa Marques (Universitat de Barcelona, Barcelona, ES)
Dr. Waldomiro J. Silva Filho (Universidade Federal da Bahia, Salvador, BR)
Dr. Walter Carnielli (Universidade Estadual de Campinas, Campinas, BR)
Board of Directors 2018–2020:
Sofia Inês Albornoz Stein (UNISINOS) - President
Marcos Silva (UFAL) - Vice President
Célia Cristina Patrício Teixeira (UFRJ) - General Secretary
Andréa Luisa Bucchile Faggion (UEL) - Treasurer
Nara Miranda de Figueiredo (USP) - Adjunct Secretary
Fiscal Council:
Eros Moreira de Carvalho (UFRGS)
Ludovic Soutif (PUC-Rio)
Waldomiro José da Silva Filho (UFBA)
Program
SEPTEMBER 1 – TUESDAY
9h – 10h30min
10h45min - 12h45min
Keynote Speakers
14h – 16h30min
16h45min - 18h45min
19h45min – 22h15min
SEPTEMBER 2 – WEDNESDAY
9h – 10h30min
10h45min - 12h45min
Keynote Speakers
16h45min - 18h45min
19h45min – 22h15min
SEPTEMBER 3 – THURSDAY
9h – 10h30min
10h45min - 12h45min
Keynote Speakers
14h – 16h30min
16h45min - 18h45min
19h45min – 22h15min
The scope and epistemological limits of the use of Big Data analytics in the context
of scientific investigation
Dr.ª Mariana Vitti Rodrigues
UNESP/Marília
SEPTEMBER 4 – FRIDAY
9h – 10h30min
10h45min - 12h45min
Keynote Speakers
14h – 16h30min
19h45min – 22h15min
SEPTEMBER 5 – SATURDAY
9h – 10h30min
10h45min – 12h45min
14h – 16h30min
16h45min – 18h45min
CONTENTS
CONFERÊNCIAS
KEYNOTE SPEAKERS
THE ROLE OF SMELL IN CONSCIOUSNESS: THE NOT SO NEGLECTED SENSE
Barry Smith, PhD
WITTGENSTEIN AND TURING
Juliet Floyd
ANALYTICAL PHILOSOPHY IN THE 21ST CENTURY: EPISTEMOLOGICAL IMPLICATIONS OF VIRTUAL REALISM
Maria Eunice Quilici Gonzalez
QUASI-REGULARISM AND CAUSAL DEFLATIONISM
Nélida Gentile
THE EXPRESSION OF HATE IN HATE SPEECH
Teresa Marques
DIALOGUE AND REFLECTION: A NON-INDIVIDUALIST EPISTEMIC PERSPECTIVE ON REFLECTION
Waldomiro J. Silva Filho
NEW LOGICS, NEW PROBABILITIES: WHAT DO THEY MEAN?
Walter Carnielli
MESAS REDONDAS
ROUNDTABLES
1. SOCIAL EPISTEMOLOGY AND NEUROPHILOSOPHY
OF NON-NATURALIST NORMATIVE FACTS AND METANORMATIVE FACT-FREE PRINCIPLES
Claiton Costa
Nythamar de Oliveira
COUNTERFACTUALS, DECISION MAKING, AND ETHICAL ALIGNMENT IN ARTIFICIAL INTELLIGENCE
Nicholas Kluge Correa
Nythamar de Oliveira
SOME BRAIN MECHANISMS THAT UNDERLIE EMOTIONAL CONSCIOUSNESS AND THEIR PHILOSOPHICAL IMPLICATIONS
Diogo Massmann
Nythamar de Oliveira
2. PROPER NAMES AND IDENTITY
Smell is often regarded as a neglected sense, both scientifically and in ordinary perceptual experience. McGann (2017) suggests that the relative neglect of olfaction in psychology and neuroscience is due to a 19th-century myth, perpetuated by Broca, that humans have a poor sense of smell. In fact, humans have a good sense of smell, but scientific attitudes to smell cannot explain why, ordinarily, people neglect their sense of smell in everyday experience. One proposal to account for this fact is that, unlike vision, audition, taste and touch, smell features little in conscious awareness. According to this minimalist view, olfaction plays a limited role in conscious daily life, only appearing when we smell something overpowering, or deliberately sniff at perfumes, foods or wines. Another view, adopted by several olfaction scientists, takes the human sense of smell to be largely unconscious, only occasionally breaking through into conscious awareness, but having an impact on cognition and emotion (Sela and Sobel 2012). Yet another view is that we only become consciously aware of an odour if it is unexpected or potentially dangerous (Köster, Møller and Mojet 2014). I will argue that these views are mistaken, in whole and in part. Smell plays a larger role in everyday conscious life than is commonly thought, though it goes unrecognized, and this requires us to explain how an aspect of consciousness can be overlooked. Part of the explanation involves what smell contributes to our perception of the flavours of what we eat and drink, which features extensively in our conscious awareness but is not recognized as due to smell. The other part points to a lack of meta-cognitive awareness of the olfactory dimension of everyday experience. More generally, the case of olfactory experience provides a useful opportunity to explore the relationship between consciousness and attention.
A philosophical reconstruction of the mutual impact of Wittgenstein and Turing upon one
another. Recognizably Wittgensteinian features of Turing’s diagonal argumentation and
machine-model of human computation in “On computable numbers, with an application to the
Entscheidungsproblem” (1936/7) and his argumentation in “Computing Machinery and
Intelligence” (1950) are drawn out, emphasizing the anti-psychologistic, ordinary language and
intersubjectivist elements of Turing’s conception. These were indebted, on my story, to exposure
to Wittgenstein’s lectures and dictations. Next Wittgenstein’s manuscripts on the foundations of
mathematics 1934-1942 are interpreted in light of the impact of Turing’s analysis of logic upon
them. Themes will include the emergence of rule-following issues, the notion of Lebensform, and
anti-psychologism.
The development of information technologies, enhanced in the 21st century by Big Data analytics
and machine learning, seems to be shifting human perception, originally anchored in
environmentally situated and historically incorporated action, to place it in a digital information
space, structured mainly by media, economic, political, and social interests. Inspired by Ryle’s
considerations on “knowing how” to do things and its relation to “knowing truths”, we are going
to discuss the new trend of Virtual Realism (à la Chalmers) and Digital Constructivism in
science (a la Floridi). Attention will be given to associative methods and the modelling employed
in Big Data analytics, which deals with the rapidly increasing amount of a massive variety of data
available for prediction, control and explanation of events. We will argue that studies on the
concepts of correlation and causality, under development in the analytical perspective of Big Data
studies, with emphasis on the notions of disposition and counterfactual, indicate strong signs of
emergence of a new logic of reasoning structuring rationality in science and common sense.
Aspects of the new logic of volume, velocity, and variety, characteristic of Big Data, will be
discussed to provide guidelines for the following question: What could be the positive and
negative epistemological and ethical consequences of embracing virtual realism grounded upon
Big Data in the study of human perception/action?
Some of the most relevant conceptions that have been offered about laws are analyzed, and certain features that affect their plausibility are pointed out. An alternative point of view is presented, inspired by the position known as the New Hume. The approach defended is an intermediate stance between the standard regularist conception and Mumford's nomological antirealism. We believe the proposal successfully avoids the difficulties mentioned and opens a new theoretical space within the dispute over the laws of nature.
In this talk, I offer an hypothesis about how hate speech expresses hate. I will hold that not all
expressions of hate are hate speech, and that hate speech can be expressive of hate even if some
speakers lack the sentiment. In so doing, I try to suggest how hate speech can correlate with hate
crimes. I combine an account of the illocutionary structure of conversational contexts and of the
This presentation aims to motivate a non-individualist epistemic conception of reflection. The proposal is non-individualist because (a) it does not consider only individual metacognitive performance; (b) it refers to a situation in which two or more people are in dialogical disagreement about the same subject or target; (c) these people conduct a dispute within the space of conversation and have the legitimate right to expect from one another a commitment to the pursuit of truth, the avoidance of error, and understanding. I call this proposal the Dialectical Perspective on Reflection. According to this perspective, reflection is a conscious and intentional intellectual operation through which a person becomes aware of the content of beliefs disputed in a dialogical or interpersonal exchange, involving both her own beliefs and the beliefs of her interlocutors. On this proposal, reflection produces the epistemic good of avoiding epistemic vices and promoting epistemic moderation.
Logical pluralism is the view that there is more than one correct logic. As probability theory can
be seen as extending logical systems (at least in the opinion of many) probabilistic pluralism is
perfectly justifiable.
When we vary the logical bases on which probability theories are founded, some probabilistic principles considered sacred are shaken: contradictions may have non-zero probabilities when the Principle of Explosion is no longer maintained, and the probability of a negated event is not necessarily its complementary probability when the Law of Excluded Middle is challenged.
These new probability measures pose new mathematical and philosophical difficulties, but I argue that such expanded probability theories are useful and contribute to the arsenal of rationality. Considering that probability theory is essential for statistics, which in turn plays a crucial role in the philosophy of science, probabilistic pluralism has a significant impact on the philosophy of science.
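Both departures can be read off the inclusion-exclusion identity, assuming finite additivity is retained and only one classical law is dropped at a time (the decomposition below is an illustrative sketch, not Carnielli's own formalism):

```latex
\begin{align*}
P(\alpha \vee \neg\alpha) &= P(\alpha) + P(\neg\alpha) - P(\alpha \wedge \neg\alpha) \\
\text{classically:}\quad 1 &= P(\alpha) + P(\neg\alpha) - 0
  \;\Rightarrow\; P(\neg\alpha) = 1 - P(\alpha) \\
\text{without Explosion:}\quad P(\alpha \wedge \neg\alpha) = \varepsilon > 0
  &\;\Rightarrow\; P(\alpha) + P(\neg\alpha) = 1 + \varepsilon \\
\text{without Excluded Middle:}\quad P(\alpha \vee \neg\alpha) = 1 - \delta
  &\;\Rightarrow\; P(\alpha) + P(\neg\alpha) = 1 - \delta
\end{align*}
```

So complementation, $P(\neg\alpha) = 1 - P(\alpha)$, is not a primitive of probability but a joint consequence of the two classical laws, and fails as soon as either is given up.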
Social epistemology, as over against traditional, individual epistemology, has sought to reexamine
knowledge and justified belief, so as “to pursue the truth (whichever truth is in question) with the
help of, or in the face of, others. It is also concerned with truth acquisition by groups, or collective
agents.” (Stanford Encyclopedia of Philosophy). Neurophilosophy, or the philosophy of
neuroscience and cognitive science, has become an interdisciplinary field of research on neuroscientific
theories of brain/mind and mental phenomena, especially insofar as mental states relate to the
brain, so as to comprise theories of cognition, language, representation, thought, consciousness,
memory, decision-making, beliefs, emotions, rationality, self, cognitive and moral psychology,
moral philosophy and metaethics, and more recently social epistemology. In our research
program, we set out to investigate whether the social conditioning of human behavior leads to
norm-abiding social behavior, or the other way around, as sociality is undermined by
neurobiological conditioning. This interdisciplinary field of research in analytic philosophy
allows thus for in-depth discussions of both theoretical, philosophical texts and empirical
findings of neuroscience, psychology, economics, ethics, and social sciences relating to problems
of naturalism, sociality, and normativity, including decision-making processes, rational-choice
theory, game theory, and artificial intelligence.
How does one draw the line between non-normative facts and normative facts, or between
naturalism and normativity, without realizing that we’re just somehow recasting the Kantian
opposition between nature and freedom, facts vs values, or Hume’s own opposition between (a
priori analytic) relations of ideas and (a posteriori synthetic) matters of fact? Derek Parfit tried to
solve this conundrum by resorting to a non-naturalist, cognitivist standpoint, supposedly safe
from charges of dogmatism and relativism, as he systematically opposes Non-Cognitivism to
Irreducibly Normative Truths. We argue that the problem with Parfit’s monumental attempt to
conjugate the very best variants of Kantianism, Contractualism, and Consequentialism in On
What Matters is precisely that he is assuming too much insofar as normativity, values, and facts
are concerned. In the same vein, G.A. Cohen claims, contra Rawls, that facts ground normative
principles only in the light of an ultimate, logically prior, normative principle. Cohen does this on
the basis of three premises: there is always an explanation of why a fact grounds a principle; that
explanation always implies a further normative principle; and the regress thus started cannot
continue ad infinitum. Taking Cohen's premises for granted, we offer a counterexample in which
a metanormative principle, i.e., a principle that tells when and only when normative principles express
Counterfactuals have become a major area of interest and interdisciplinary application, especially
in logic, semantics, psychology, decision theory, game theory, and artificial intelligence. In the
case of AI, counterfactual models allow modeled agents to figure out how decision-making procedures can occur while respecting the premises of ethical behavior. As intelligent systems increasingly become an integrated part of our society (complex environments), the concept of an idealized decision becomes increasingly important, especially for security reasons. In
this paper, we offer some examples of how an AI algorithm can perform a specific task of
executing totally unexpected actions, in order to highlight our difficulty of controlling and
predicting AI agency. If one seeks to improve her control over the impact that intelligent systems
might have on the environment in which they are inserted, a better understanding of
counterfactuals is necessary. Indeed, one of the shortcomings in the area of decision theory
involves the formalization of reasoning and counterfactual logic, where standard models of
decision making (such as game-theoretic) cannot be simply used to describe an idealized decision
procedure. After all, what is meant by an idealized decision procedure? We need a formal definition of what it is to make a good decision, so that heuristics can be constructed with this objective. However, this definition leads us to the still poorly understood use of counterfactuals in the field of AI safety, insofar as counterfactual decision processes are applicable to artificial systems with human interests. It is necessary that artificial
intelligent agents have their values aligned with human values, given that one cannot expect AI
to develop human moral values simply because of its intelligence, according to the so-called
Orthogonality Thesis.
Keywords: AI Ethics, Complexity, Counterfactuals, Decision Theory, Social Epistemology
The possibility of unconscious emotional processes implies a relative disconnect between the
explanatory models of emotions and consciousness. Although conscious processes are always
accompanied by emotions, it would be possible to have emotions that are not accompanied by
consciousness as well. Jaak Panksepp builds on the idea that subcortical circuits that control
innate behaviors and related physiological responses are the core of the states of emotional
consciousness. Similarly to Ned Block, Panksepp appeals to conscious states of which the organism is aware but which it cannot introspect or talk about. So, Panksepp’s emotion theory can be conceived
as a First-Order Emotion Theory. Antonio Damásio highlighted the importance of body-sensing areas of the cortex in giving rise to feelings, but later revised this view, arguing that core feelings are
products of subcortical circuits that receive primary sensory signals from the body. As over
against Panksepp, Damásio assumes that these subcortical action systems operate
nonconsciously. On the other hand, Joseph LeDoux argues that these circuits are better
considered in terms of nonconscious First-Order Representations. Unlike Damásio, LeDoux’s
proposal pointed out that in some cases (e.g., backward masking or continuous flash
suppression), emotions and consciousness can be separate processes, maintained by two different
mechanisms. This does not mean that defensive survival circuits play no role in consciousness,
but that they are not directly responsible for the conscious experience itself. This proposal seems
better interpreted as a High-Order Representation assembly of conscious feelings supported by
General Networks of Cognition.
Keywords: Emotions, First/High-Order Representations, General Networks of Cognition,
Subcortical Systems.
Since the publication of [3], identity and proper names have played crucial roles in analytical
philosophy. This round table proposes to debate some problems with respect to them.
One of the main philosophical problems related to identity has been commonly called Frege’s puzzle. The puzzle can be summarized as follows: how can we characterize the real informativeness of non-trivial true identities? But this puzzle can be generalized to quantifiers. Whilst in Frege’s original case an individual ignores the co-denotation of two proper names, in the generalized version of the puzzle an individual ignores the co-extensionality of two domains of quantification.
There are many ways to respond to Frege’s puzzle, but Jackson’s 2-dimensional semantics proposed in [5] presents an interesting solution. The talk “Generalized Fregean Puzzle and Two-Dimensional Semantics” takes up this solution.
Frege’s puzzle can be formulated as follows: for any true identity a = b, how can someone fail to know it? In other words, how can we characterize the real informativeness of non-trivial true
identities? Despite its centrality in the contemporary debate on the semantics of proper names,
Frege’s puzzle is just the tip of the iceberg of a larger problem on the informativeness of logical
truths: how is it possible for someone to ignore that a given logical validity is true? In fact, we can
define a generalized version of Frege’s puzzle. The informativeness of an identity of the form a =
b is revealed by the failure of substitution of identicals in intensional contexts: a = b is informative
because it is possible for an individual to jointly believe φ(a) and ¬φ(b), for some formula φ(x).
In the same way, it is sometimes possible for someone to jointly believe ∀xφ(x) and ∃x¬φ(x), and
this is just a generalization of the problem proposed by Frege. Whilst in Frege’s original case an
individual ignores the co-denotation of two proper names, in the generalized version of the puzzle
a person ignores the co-extensionality of two domains of quantification.
There are several ways of answering Frege’s puzzle, but 2-dimensional semantics [1] offers us an
especially interesting solution. This framework characterizes the semantic contents of proper
names in two levels, namely, the level of character and the level of content. The content of a name
is its reference, which it denotes rigidly. However, the link of a name with its reference is
conventional, i.e., the name could have been linked to a different reference in a counterfactual
context of linguistic use. This contingent aspect of the semantics of a proper name is captured by
its character, a function that associates some content to the name in each context of linguistic use.
Based on this analysis, we can elaborate a solution for Frege’s puzzle: even though, in the actual
context of linguistic use, a = b is true, often the epistemic situation of an agent doesn’t exclude
counterfactual contexts in which the characters of a and b select distinct references for these
names.
2-dimensional semantics offers a monist solution to Frege’s puzzle, i.e., a kind of solution that doesn’t require commitment to a distinction between alethic and epistemic modalities. On
the other hand, mainstream accounts of the problem of the informativeness of logical truths in
general involve some kind of modal pluralism (e.g., [2, 3]). Hence, from a monist point of view, a
generalization of the 2-dimensional solution of Frege’s puzzle to the wider problem of the
informativeness of logical truths would be very important. As a partial contribution in this
research program, in this talk I will show how it is possible to give a 2-dimensional account of
the generalized Fregean puzzle. More specifically, I will argue that our use of quantifiers is also
mediated by characters which fix quantificational domains in every context of linguistic use.
Then, I will show that the epistemic situation of a regular individual does not exclude a priori counterfactual contexts of linguistic use in which the characters associated with the quantifiers occurring in a formula fix different quantificational domains.
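Schematically, using notation introduced here only to fix ideas (it is not the author's own), the 2-dimensional solution and its proposed generalization can be stated as:

```latex
\chi_a : C \to D
  \quad \text{(the character of $a$: contexts of linguistic use to contents)}

\chi_a(c^*) = \chi_b(c^*)
  \quad \text{($a = b$ is true at the actual context $c^*$)}

\exists c \in C_E : \chi_a(c) \neq \chi_b(c)
  \quad \text{(informativeness: the agent's epistemic situation $C_E$ leaves open
  contexts where the characters diverge)}

\exists c \in C_E : \delta(c) \neq \delta(c^*)
  \quad \text{(generalization: $\delta$ is the character fixing a quantifier's domain)}
```

On this rendering, ignorance of a logical validity is ignorance of which context of linguistic use one occupies, for quantificational domains just as for names.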
Keywords: Frege’s puzzle; 2-dimensional semantics; Identity; Logical knowledge.
References
[1] F. Jackson. Why we need A-intensions. Philosophical Studies, 118(1–2):257–277, 2004.
According to the causal theory of reference, a rigid designator is that which designates the same
individual in every possible world in which that individual exists. It is not clear how one should
construe the expression “the same” in the preceding sentence. In [2], Kripke was very dismissive
of the idea that criteria of identity, and especially transworld criteria of identity, are needed at all
for the determination of a proper name’s reference. Kripke presented us with an account in which
individuals are projected onto possible worlds by stipulation, and thus their identity conditions
remain somewhat primitive.
On the other hand, Michael Dummett in [1] argued that a criterion of identity is a component of
the Fregean senses of proper names. Dummett also distinguished proper names from other
classes of terms based on the constitution of their Fregean senses. Unlike proper names, whose
senses contain only criteria of identity for their referents, the senses of adjectival predicates
contain only criteria of application, while those of sortal predicates contain both criteria of
application and of identity.
In this talk, I argue that the Frege–Dummettian account fares better than the Kripkean one, at
least in what concerns a certain class of names, biological taxon names. In order to accomplish
this goal, I must first present the thesis, which is widely accepted in philosophy of biology,
according to which biological taxa are individuals. Second, I suggest an amendment for the
Frege–Dummettian account: that taxon names, despite being proper names, have their senses
constituted by both criteria of identity and criteria of application. I conclude with some
philosophical problems related to the taxonomic nomenclature that can be solved using the
Frege–Dummettian account.
Keywords: theories of reference; rigid designators; proper names; criteria of identity; criteria of application.
References
[1] Michael Dummett. Frege: Philosophy of Language. London: Duckworth, 1973.
[2] S. A. Kripke. Naming and Necessity. Library of Philosophy & Logic. Blackwell Publishers,
1981.
3. ROUNDTABLE: PROBLEMS IN THE PHILOSOPHY OF MEMORY
3.3. WHAT MAKES A MENTAL ACT COUNT AS A MEMORY
Susie Kovalczyk dos Santos
Doctoral student, Graduate Program in Philosophy, UFSM
lczyk.susie@gmail.com
The aim of this talk is to present some recent developments of the approach to paraconsistency
in terms of preservation of evidence. A logic of formal inconsistency called LETF (the logic of
evidence and truth based on FDE) will be introduced. LETF is an extension of the well-known
Belnap-Dunn 4-valued logic that treats positive and negative non-conclusive evidence as two
primitive and non-complementary notions. It is equipped with a classicality operator that is able
to distinguish scenarios of conclusive evidence, subject to classical logic, from non-classical
scenarios where the evidence available is non-conclusive or unreliable. Decidable bi-valued
semantics, a probabilistic semantics, and Kripke models for LETF will be presented and
discussed.
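As a concrete illustration of the Belnap-Dunn base that LETF extends, the four values can be modelled as pairs (positive evidence, negative evidence). The pair encoding and function names below are my own sketch of the standard four-valued semantics, not the official presentation of LETF:

```python
# Belnap-Dunn (FDE) values as evidence pairs:
# t = (1, 0) "true", f = (0, 1) "false",
# b = (1, 1) "both" (conflicting evidence), n = (0, 0) "neither".
t, f, b, n = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(v):
    """Negation swaps positive and negative evidence."""
    return (v[1], v[0])

def conj(v, w):
    """Positive evidence for a conjunction needs both conjuncts;
    negative evidence needs only one."""
    return (min(v[0], w[0]), max(v[1], w[1]))

def disj(v, w):
    """Dual of conjunction."""
    return (max(v[0], w[0]), min(v[1], w[1]))

def designated(v):
    """A formula 'holds' when there is positive evidence for it."""
    return v[0] == 1

# Explosion fails: alpha valued b yields a designated contradiction,
# while an unrelated beta valued f stays undesignated.
alpha, beta = b, f
assert designated(conj(alpha, neg(alpha)))
assert not designated(beta)
```

The classicality operator of LETF then marks the formulas whose evidence is conclusive, restoring classical behavior (values restricted to t and f) in those scenarios only.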
Here, we will investigate the possibility of using Sundell and Plunkett's seminal idea of
metalinguistic negotiation (2013) in the discussion of revision of logic. The concept of
Metalinguistic Negotiation was already successfully applied in discussions concerning
metaethics (Burgess and Plunkett 2013a; 2013b), aesthetics (Sundell 2017), metaphysics
(Plunkett, 2015; Thomasson, 2017) and law (Plunkett and Sundell, 2014). It is noteworthy that
metalinguistic negotiations are compatible with developments of a social perspective on logic as championed by Brandom's (1994, 2000, 2008) and Dutilh Novaes' work (2013, 2015, 2016), because both investigate logic on a non-individualistic and anti-representationalist platform, emphasizing the notion of normativity to understand the nature of logic. In particular, the very
notion of metalinguistic negotiation can be tested to advance a multi-agent platform for
understanding the kind of disagreement we have, when we are concerned with the possibility of
revision of logic. The main idea is that philosophical disputes concerning revision of logics should
be taken as metalinguistic disputes. In other words, discussions to motivate revising logic take
place on the metalinguistic level, since people longing to revise logical principles do so by
negotiating the way we use our concepts, giving special attention to what we do or how we use
some terms and not by referring logical vocabulary to some independent structure of reality to
test its correctness.
The purpose of this presentation is to discuss and evaluate the main arguments developed in
Kripke’s unpublished lectures “Rigid Designation and the Contingent A Priori: The Meter Stick
Revisited” delivered at the University of Notre Dame in 1986. In these lectures Kripke considers
the objections raised by two of his main critics against his classical examples of contingent a
priori truths: the objections raised by Donnellan (1979) against the Neptune example, and the
objections raised by Salmon (1986) and Plantinga (1975) against the standard meter case.
Donnellan’s point is that one cannot have any de re knowledge concerning Neptune based only
on the baptism because there is something missing, i.e., acquaintance with the referred object.
And Salmon and Plantinga argue that some visual contact must exist between the baptizer and the
meter bar in order for there to be any knowledge at all, but this disqualifies that kind of knowledge
as a priori. Kripke introduces some slight changes in his initial position that are, according to
him, enough to deal with both objections. As I shall argue, Kripke’s replies are both insufficient
and dispensable, for there is an easier way out of both forms of criticism. (I propose an alternative
approach in terms of the illocutionary success of the act of baptism that avoids both sorts of
objections.)
In this paper I examine some modal logics in which the modal operator □ can be read as
necessity, or impossibility, or both; in a sense, a noncontingency operator. As an example, consider classical modal logics, those logics closed under the rule of inference α ↔ β / □α ↔ □β. Usually the □ operator represents necessity, but it can also be read as possibility, impossibility, contingency, non-necessity, even negation, and the rule still preserves validity. On the other hand, in a rule like α → β / □β → □α, □ can no longer be read as necessity or
possibility – but it makes sense to read it as impossibility or negation: if α implies β, and β is
impossible, or false, so is α. I am interested in determining what conditions must be required,
both on neighbourhood frames and relational frames, to force one of these different readings, and
under what conditions the readings still remain neutral. In this paper I will consider only the
necessity/impossibility readings, extending some previous results. Several conditions on frames,
of different strengths, are identified, as well as the modal formulas corresponding to them. I also
consider several logics obtained by adding one or more of these formulas as axioms, and prove
soundness, completeness and decidability theorems for each of them.
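For instance, on a relational frame the impossibility reading has the standard clause below (stated in my notation); under it the rule α → β / □β → □α preserves validity, since if α → β holds everywhere and β fails at every R-successor of w, then α fails at every R-successor of w as well:

```latex
\mathcal{M}, w \Vdash \Box\alpha
  \quad\Longleftrightarrow\quad
  \forall v\,\bigl(wRv \;\Rightarrow\; \mathcal{M}, v \nVdash \alpha\bigr)
```

The necessity reading differs only in requiring α to hold, rather than fail, at every successor; the frame conditions discussed in the talk are what force one clause or the other.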
COMUNICAÇÕES
CONTRIBUTED PAPERS
In Events and Semantic Architecture (OUP 2005), Paul Pietroski seeks to demonstrate the viability of Conjunctivism, i.e. the thesis that predicate-argument relations are more adequately described in terms of conjunction than of functional application. The author takes as his starting point the (neo-)Davidsonian analysis of events, responsible for the formalization of a series of entailment patterns, such as the relation exhibited between sentences (1) and (2):
(1) Brutus stabbed Caesar violently with a red dagger.
(2) Brutus stabbed Caesar gently with a blue dagger.
We see clearly that (1) entails sentences (3)-(5). Moreover, (2) entails sentences (5)-(7):
(3) Brutus stabbed Caesar with a red dagger.
(4) Brutus stabbed Caesar violently.
(5) Brutus stabbed Caesar.
(6) Brutus stabbed Caesar gently.
(7) Brutus stabbed Caesar with a blue dagger.
We also see that the conjunction of (1)-(2) entails neither (8) nor (9):
(8) Brutus stabbed Caesar gently with a red dagger.
(9) Brutus stabbed Caesar violently with a blue dagger.
There is, therefore, a network of entailments. This pattern is usually explained by assigning sentences (1)-(2) the logical forms (1a) and (2a):
(1a) ∃e{Stabbed(e, Brutus, Caesar) & ∃x[Red(x) & Dagger(x) & Instrument(e, x)] & Violently(e)}
(2a) ∃e{Stabbed(e, Brutus, Caesar) & ∃x[Blue(x) & Dagger(x) & Instrument(e, x)] & Gently(e)}
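The entailment network falls out of the conjunctive forms: dropping conjuncts inside an existential quantifier is valid, while merging conjuncts drawn from two distinct existentials is not, since the two witnesses need not be the same event (schematic letters, my rendering):

```latex
\exists e\,[S(e) \wedge A(e) \wedge B(e)]
  \;\models\; \exists e\,[S(e) \wedge A(e)]
  \;\models\; \exists e\,S(e)
\qquad\text{but}\qquad
\exists e\,[S(e) \wedge A(e)] \;\wedge\; \exists e\,[S(e) \wedge B(e)]
  \;\not\models\; \exists e\,[S(e) \wedge A(e) \wedge B(e)]
```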
The price for naturalising morality is to give up certainty, but not normativity nor the objectivity
of its content. Once one takes objectivity and norm for granted, certainty might be a fair price to
pay. Moreover, it might be a price we should be willing to pay in order to achieve a better
understanding of the social and political process of choosing and enforcing values and norms. I have been arguing in favour of explaining normativity in merely immanent terms through an analysis
of the will. The point here is that the basis upon which normativity has been raised and
maintained among humans is a “will” oriented to the will of others, a “will” that everyone wants
to be dominant, a “will” that wants to be the will of other wills, but which is also reflexive and
therefore concerns everyone. In the first part of the talk, I will return to this point to schematise
the volitive structure of norm objectiveness. This structure is, nevertheless, value-neutral. What
shall one will? Does anything one will under the volitive structure of normativity work as a good
moral norm?
The problem of the content of the norm is the problem of its value. If anything could count as a good moral norm, morality would lose its objectivity. Validity does not equal goodness, but one may hope that a valid moral norm is a good one as well. Indeed, the supervenience of the content on the structure is precisely what principles as universal as the Golden Rule and the Categorical Imperative try to provide to morality. In the second part of the talk, I shall address this issue. The
answer to the gap between validity and value, however, cannot compromise the research with a
kingdom of ends, freedom or other furniture of a nature beyond nature, without jeopardising its
Old questions regularly allow for new answers. How are intentional properties possible in a physical world? How can something (mental or public representations) be about something else?
Intentionalists believe that intentionality, the relational property of being about something, is
constitutive of mentality. Brentano’s thesis says: 1) the mental is intentional; 2) nothing physical
exhibits that property. This divide is brutal for any philosopher, even with a modest naturalistic
outlook. An interesting suggestion to close the gap is a generalized dispositionalism relative to
mental properties. But not dispositions as they were conceived of in the empiricist tradition. The
dispositional nature of mentalistic vocabulary has been revealed in the most striking way by
Gilbert Ryle. But Carnap and Ryle tried to “analyze away” dispositional terms through reduction
sentences or conditional analysis. Recently, in metaphysics and in the philosophy of science, a
different, realistic approach to dispositional properties has been developed, especially by
Australian philosophers. Dispositional properties have the characteristic of two-sidedness (Nancy Cartwright): they have a physical base that causes their manifestations in auspicious circumstances, like the solubility of a sugar cube, which is physically realized in its molecular
structure and causes its dissolution when put in water. That realistic dispositionalism, I believe,
should be extended to include all mental properties, which are also dispositional and realized
physically in the brain. One does not cease to be a conscious being while sleeping deeply (Rudder
Baker) and phenomenal properties can be reconstructed as dispositional (Gozzano). The case for
propositional attitudes is even simpler and well known. My aim is to show how we can be
intentionalists (by accepting the first part of Brentano’s Thesis) and dispositionalists at the same
time (by accepting that mental states, acts and events have a physical base of realization). In a
nutshell, here is my working hypothesis: intentional phenomena are manifestations of mental
dispositions. That could be a way to close the gap. Intentional properties are everywhere in the
world we inhabit, and that world is not the one described by physics. It is full of artefacts like
institutions and public representations that all depend on intentional phenomena. But a physical
base can, in auspicious circumstances, cause a manifestation which is about something else. That
is the program. Something has to be said about the repertoire of concepts and knowledge of meanings
in that context, because public representations instantiate semantic properties which also are
intentional, and many of our mental states have their content specified by the use of a sentence
belonging to a public language.
Pascal’s Wager is a pragmatic argument for the existence of God. In its most standard form,
Pascal’s argument leads us to conclude that rationality requires one to wager for the existence of
God because this is the option that maximizes the expected utility. If you bet for the existence of
God and it turns out that God exists, then you get an infinite reward; otherwise, if you bet against
the existence of God when he exists, then you get either a negative infinite or finite utility value.
You are only finitely rewarded for either betting for or against God when he doesn’t exist. Given
that your subjective probability for God’s existence is some positive real-valued number, your
expected utility of betting for God will be infinite, making it much greater than your expected
utility of betting against God, which will be equal to either negative infinity or some finite number.
Thus, assuming that such a decision matrix is right, the principle of expected utility maximization
requires you to bet for God. As attractive as this line of reasoning might look, Pascal’s Wager
depends upon various crucial assumptions. In particular, it is widely assumed that you should
assign a fixed, point-valued subjective probability to the proposition that God exists. But what
happens to this argument if we drop this condition, allowing agents to have imprecise
probabilities instead? A typical way of representing an agent’s entire doxastic state is by appealing
to a set of probability functions rather than just by a single probability distribution. In what
follows, I will explore Pascal’s argument in the context of imprecise decision theory. More
specifically, I will assess the plausibility of different principles of rationality when it comes to a
decision matrix in which it is rationally permissible to have a vague or imprecise subjective
probability in God’s existence. Also, philosophers have challenged Pascal’s argument in a variety
of ways. I will examine whether the imprecise version of this argument stands on its own feet in light of the so-called many-gods objection, which claims that you are no longer rationally
required to bet for the traditional God of theism whenever one introduces other deities to the
decision matrix.
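The expected-utility comparison at the heart of the Wager can be made concrete in a short sketch. This is my illustration, not the author's: the payoff values and the `expected_utility` helper are hypothetical, chosen only to mirror the standard decision matrix described above.

```python
import math

def expected_utility(p_god, u_god, u_no_god):
    """Expected utility of an act, given a subjective probability p_god
    that God exists, payoff u_god if he does, u_no_god if he doesn't."""
    return p_god * u_god + (1 - p_god) * u_no_god

# Standard matrix: wagering for God yields an infinite reward if God exists
# and a finite payoff otherwise. Any strictly positive point probability
# makes the expected utility of wagering for God infinite.
for p in (0.5, 0.001, 1e-9):
    assert expected_utility(p, math.inf, -1) == math.inf

# With imprecise credences, the agent holds a *set* of probability
# functions; wagering still maximizes expected utility relative to every
# member, provided each assigns positive probability to God's existence.
credal_set = [0.3, 0.05, 1e-6]
assert all(expected_utility(p, math.inf, -1) == math.inf for p in credal_set)
```

Note that the argument's dependence on strictly positive probabilities is visible even in floating-point arithmetic: `0 * math.inf` evaluates to `nan`, so a credal set containing zero would no longer deliver an infinite expected utility for every member.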
Joseph Raz, in his work Engaging Reason, proposes that, with respect to basic views about the nature, grounds, and elements of human action, we can position ourselves in basically two
Guidance control ensures freedom and moral responsibility over the agent’s action. It requires
that the agent’s own mechanisms issue in the action, and that the mechanism is moderately
reason-responsive, according to Fischer and Ravizza. The former requirement preempts cases of
manipulation, like brain control by an evil neuroscientist. Reason-responsiveness requires that
the agent’s action respond to her reasons to act by exhibiting reason reactivity and reason
receptivity. Reason reactivity is the agent’s capacity to choose according to her reasons to act, as
well as her capacity to act according to her choice. Reason receptivity has an epistemic nature. It
is the agent’s capacity to recognize reasons for action, specifically moral reasons. As stated by
Fischer and Ravizza, the mechanism that issues in the action is strongly reason receptive if (i) it
shows a pattern of reason recognition, and the agent (ii) recognizes the reasons that exist for
acting and how they fit together. Additionally, (iii) the pattern must be understandable by a third
party, who should be able to pick out what constitutes sufficient reason for the agent’s action, and
(iv) reasons must relate to each other in an understandable way to form a comprehensible
pattern. Finally, (v) reasons must be grounded in reality, for an agent might have coherent reasons that form an understandable pattern yet are not grounded in reality. Interesting
phenomena in recent years show the relevance of the epistemic side of control. Some agents act
in a way that may seem cynical or irrational to most of society. Anti-vaxxers, for instance, have earned the label for being against vaccination. Simply put, they believe that vaccination is bad rather than good for health. As a result, they refuse to vaccinate their children, which may allow for the return of, e.g., measles. Should anti-vaxxers be held responsible? On the other hand, in a
traditional society in which medical treatments are intertwined with religion, a healer may feed
The purpose of this presentation is to introduce the notions of Mediational Fields and Dynamic
Situated Senses as a way to identify the logical structure of perceptual experiences. In order to introduce these topics, I will draw some lessons from the famous debate between Dreyfus
and McDowell about the structure of experience. I will notice, firstly, that even accepting
McDowell’s characterization of human experience as essentially permeated with conceptuality
In recent decades, both linguistic comprehension and episodic and semantic memory have been described as processes of mental simulation of (sensorimotor and other) experiences. I will briefly present this understanding, which I call the "simulationist paradigm", in order to explore whether it can be reconciled with the non-cerebralist physicalist conception of mind that has been developed by Manzotti (2017; 2019). To that end, I will draw attention to some key ideas of this conception: (1) Object-object relativity: physical objects have a relative and actual nature, that is, their properties are defined in relation to other objects and in actual relations in space-time. (2) Experience-object identity: the human body is a physical object in relation to which surrounding objects (of a relative and actual nature) come to exist. (3) Relativity of the present: the present time encompasses the set of causal relations that constitute an individual's physical world. (4) Past objects are causally present in memory: accordingly, remembering is experiencing, that is, a kind of
In this talk, I will present a proposal about the nature of arithmetic according to which numbers are cognitive tools. The proposal is heavily inspired by scientific research in the cognitive sciences, especially in an area called numerical cognition, as well as by findings in developmental psychology and mathematics education. On this proposal, there are no numbers as traditionally understood, that is, as the referents of the numerals, which are often seen as abstract entities existing outside space and time. Numerals, here, are empty names, without reference. The reason lies precisely in the scientific results that inspire the proposal, which I briefly summarize below. Studies with children who are learning numbers reveal that they begin by memorizing the numeral sequence as a sequence of meaningless words. They learn the meaning of the numerals only once they master the counting process, a process whose purpose and meaning they likewise do not initially grasp. At this initial stage, the child recites numerals and counts collections merely mechanically, without really understanding what she is doing. Once learning is complete, however, the child will have developed numerical concepts associated with those initially meaningless numerals. At first, these concepts are not understood as referring to numbers (in the sense of external objects); they have only an operational function in the use of counting. Later, when we advance in mathematics education and learn operations such as addition and multiplication, numerical concepts are reified, that is, they come to be seen as if they referred to external objects (which we then call numbers). This reification process is instrumental: we come to treat numbers as if they were real objects because doing so brings cognitive advantages, notably the facilitation of mental calculation. In this scientific description of the acquisition of the concept of number, therefore, there are no numbers, only the idea that there are numbers, which originates in the need to make arithmetical operations cognitively easier. Within this framework, I show how it is possible to understand the objectivity and truth of numerical statements in which numerals are empty names. The proposal is that arithmetical statements are descriptions of procedures associated with the counting process and the operations defined over it. For example, on this proposal addition is understood as the process of moving a given number of places forward in the numeral sequence, starting from a given point. Thus '2+2=4' is a true sentence because it faithfully describes the operation whereby, starting from '2' and moving two places forward, one arrives at '4'. Arithmetical truths are thereby objective because the rules governing the operations they describe are deterministic and leave no room for subjectivity. As an empirically informed approach to the ontology of arithmetic, the proposal presented here is open to empirical refutation and is therefore the realization of a genuinely naturalistic project in the philosophy of mathematics.
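As a rough illustration of this procedural reading of arithmetic (my sketch, not the author's: the `NUMERALS` list and the `add` helper are hypothetical), addition can be modelled as movement along a sequence of uninterpreted numeral strings, with truth as faithful description of the procedure:

```python
# A finite toy segment of the numeral sequence, treated as empty names:
# the strings have positions in the sequence but no numerical referents.
NUMERALS = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]

def add(a, b):
    """Add by starting at numeral a and moving forward as many places
    as numeral b occupies in the sequence (its counting position)."""
    return NUMERALS[NUMERALS.index(a) + NUMERALS.index(b)]

# '2+2=4' is true because the procedure it describes lands on '4':
assert add("2", "2") == "4"
assert add("3", "4") == "7"
```

The deterministic rule leaves no room for subjectivity, mirroring the objectivity claim above; the toy sequence is finite, so sums past '10' would simply fall outside this illustrative fragment.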
This talk aims to analyse the problem of God's relation to time from the perspective of Open Theism. No arguments for the existence of God will be presented, since the point is precisely to analyse how theories about the nature of time fit Open Theism, a theistic theory which holds that God's knowledge of the future is conditioned by our actions. According to this theory, God has no knowledge of our future acts, because such knowledge would concern contingents that have no truth value; the future is indeterminate (open). Divine omniscience would then be understood as the capacity to know all truths that are logically possible to know. This interpretation of God's omniscience is the basis of the solution proposed by Open Theism to the problem of the conflict between divine foreknowledge and free will.
With the scope of the analysis thus outlined, the hypotheses of God's temporality and atemporality will be presented. God is temporal iff he is contained in time and participates in the experience of the passage of time; God is atemporal iff he is outside time and does not participate in the experience of the passage of time.
God's atemporality originates in medieval theories strongly influenced by Neoplatonism, a point sharply criticized by Open Theism. According to this criticism, the idea of an atemporal God arises from the need to accommodate God's static existence: Neoplatonic thinkers valued simplicity over complexity, and permanence over change or movement, so a perfect being would be an immobile, immutable God. What the defenders of Open Theism claim is that this interpretation is somewhat "contaminated" by the beliefs of the Neoplatonic thinkers widely studied by the early Church Fathers who shaped medieval thought.
Among the possible defences of God's temporality, I will use the one concerning his capacity to interact with the world. This capacity is, by itself, already an indication of temporality. By "action" we may understand, roughly, the capacity to alter states of affairs or ideas; in turn, to "alter" something is to take it from a state of existence x at a time t1 to a state of existence y at t2. Hence the very capacity to act already implies "acting in time". On the other hand, how could a being that supposedly created time itself exist in time? If we speak of God's existence "before time", we are already speaking in temporal terms. In that case, either time itself is finite and therefore could not contain an infinite existence, or time has always existed. In the latter case, what would time be? Something coexistent with God, as eternal as he is, or something that is part of him (the panentheist view). Finally, one last alternative, the nonexistence of time, will be presented before the conclusion, which will assess which of the alternatives best fits the proposal of Open Theism, and how.
For Kant, only rational action is free. In the third antinomy of pure reason, he holds that we would have the freedom to act, or to break with the deterministic causal chain of the world, if we had the capacity to initiate a state of affairs absolutely, independently of other states of affairs; that is, that the deterministic causality of the brute world could be reconciled with the possibility of free action. For John Searle, an agent's mental states must be capable of being about what is external to them for the action to be considered free. For him, however, if such an action has a belief or a desire as its sufficient cause, the action cannot be said to be free, because in that case there is no break with the relation of cause
Little by little, science gathers pieces of the puzzle hidden inside the human head. We intend to point out some of these pieces and to suggest a theory that can combine them harmoniously. The average encephalization quotient (EQ) in humans is the highest ever recorded among animals. The great apes are similar in terms of EQ and cognitive and social performance, setting the reference average among the other non-human species. Both an organism's information-processing capacity and the maximum number of members of the groups formed by primates are proportional to the ratio between neocortex volume and body volume. The estimated variation in EQ over the course of evolution went through moments of stagnation (between 1,800 and 600 thousand years ago), reaching its peak between 600 and 200 thousand years ago (during which the EQ practically tripled). This period of rapid change was accompanied by profound alterations in human behaviour. At no point did we completely detach ourselves from our animality. Nevertheless, something distinctive is present in our species on account of the enlarged skulls that protect our cortices, with lobes compartmentalized into highly efficient integrated modules. This introductory panorama justifies raising the question: does the increase in the number of neurons affect the way information processing takes place? Intending to contribute to the debates begun with the linguistic turn, we propose to contrast reductionist and
1. Introduction
Taste disagreements have been one of the topics that have received much attention in the
philosophy of language in the last fifteen years. These disagreements have been characterized
using examples like the following:
David: Licorice is tasty.
Layla: That’s not true. Licorice is not tasty.
Although widespread, this characterization (henceforth the standard characterization, or TSC for short) is not adequate, because taste disagreements are often complex conversations that rarely fit a statement-response pattern. This work defends the claim that using typical two-turn examples to represent taste disagreements is a misguided strategy that has helped to consolidate two false ideas about
taste disagreements: the myth of simplicity and the myth of uniformity. The former refers to an
oversimplification of taste disagreements; the latter to the idea that taste disagreements are
expressed mostly in the same way, using the same expressions. Against these two ideas, I argue
that taste disagreements are larger and more complex than one might expect given TSC, and that
they are expressed in a variety of ways, using not only typical explicit marks of disagreement such
as ‘no’, ‘I disagree’, or ‘that’s not true’, but other linguistic constructions.
2. Two Sources of Motivation
Taste disagreements are complex conversations that rarely fit a statement-response pattern. The
sources of motivation for defending this idea come from two different fields: philosophy of
The aim of this presentation is to lay out the philosophical basis behind Wittgenstein's proposals concerning the notions of possibility and necessity. To that end, I will contrast the writings of his final period with what one reads in the Tractatus about possibility and necessity, paying attention to their context of origin and considering a distinction between physical possibility and combinatorial possibility in order to analyse the modal implications of the radical change in his conception of language. The question I intend to face here is: what is essential to the notions of possibility and
In this talk, we want to relate two trending topics in contemporary epistemology: the discussion of group knowledge and the discussion of the knowledge-first approach. In the social epistemology of group knowledge, no one has yet seriously applied and developed Williamson's knowledge-first approach. For example, explanations for group knowledge, as presented by
Tuomela (2004), Corlett (2007), Gilbert (2014), and Lackey (2020), assume that knowledge is
analysed in terms of more basic concepts, such as group belief, group justification, and so on.
However, if Williamson’s theory is correct, these are not good explanations for understanding
group knowledge. Thus, we want to analyze what consequences Williamson’s theory has for social
epistemology, namely for an understanding of group knowledge.
Starting with the knowledge-first approach, Williamson (2000, 33) argues that “the concept
knows cannot be analysed into more basic concepts”. Knowledge is prior to other epistemic kinds,
In this paper, we aim to analyse some of the current challenges posed by Information and Communication Technologies (ICTs) regarding privacy and autonomy in the control of personal data. In his treatise On Liberty, John Stuart Mill was already concerned with the question of limiting the power that society exerts over the individual. Protection against the power of government is not enough; what also matters is protection against the "tyranny of the prevailing opinion and feeling". Inspired by Mill, we argue that individual independence must be protected against the impositional and at times irrational ideas and practices of collective opinion. The difficulty lies in identifying in practice the boundary between personal autonomy and social control. This difficulty not only persists but becomes pressing in today's Information Society. In this context, personal data refers to whatever identifies or can identify a person, such as consumption data, medical data, credit data, browsing data, geolocation data, and so on. In this
Inspired by Kripke's distinction between the concepts of analyticity and apriority via the auxiliary concept of necessity [10], the central thesis of this paper is that there are more types of political attitudes than expected; the absence of some of them from the political spectrum stems from a deep confusion between the concepts of right and conservatism. Moreover, the separation between two types of order, economic and moral, will justify the more original attitudes of "left conservatism" [7] and "right progressivism" [3], while explaining the ambiguous notions of political centres and extremes. In conclusion, a comparison between political values and logical values, truth and falsity, will allow the ambivalent attitudes "left and right" and "neither left nor right" to be explained sometimes as cases of conceptual confusion and sometimes as a collapse of political discourse into the "extreme-centre" [5].
The talk will consist of five sections. The Introduction raises the question with a conceptual analysis of saying and doing, anticipating the central method and posing some general questions about the current crisis of political representation. Section 1 deals with the political opposition between left and right and its contested relevance [2,11]. Section 2 presents the central method that will lead to the final conclusion, namely the assumption of a structural analogy between three different conceptual families: Kant's judgments [9], Epicurus's desires, and political attitudes. Section 3 describes the content of political attitudes as a combination of two main concepts, right (or left) and conservatism (or progressivism), and then questions their relevance in the manner of Quine [12], both with respect to the idealized opposition between left and right and with respect to the confused meaning of progressivism. Section 4 proposes a further comparison between two kinds of opposition, the political values left-right and the logical values truth-falsity [13], claiming that the difficulties related to the latter apply equally to the former. Section 5 concludes the analysis of the opposition between left and right by treating it as a convenient hypothesis: just as with colour concepts [6], these dual concepts are necessary and we understand them as guidelines, despite their lack of clear definition [1]. The Conclusion will propose a reductionist account of the two supposed alternative attitudes in politics, namely neither-left-nor-right and left-and-right [1]. The former is reduced to a disguised form of centre-right and centre-left, while the latter results from a conceptual confusion between right and conservatism. Finally, an alternative position is emphasized and regarded as the true culprit of the crisis of political representation: the "extreme centre", which is a political counterpart of the iterated modality of "necessary contingency" and leads to semantic nihilism.
At least since Immanuel Kant's seminal Critique of Pure Reason (1787), the idea that the two triads a priori, analytic and necessary, on the one hand, and a posteriori, synthetic and contingent, on the other, always go together has been seriously put in check. More recently, Kripke's Naming and Necessity (1980), arguing in a completely different direction, has also been decisive in this regard. Based on Kripke's work, it seems relatively safe to say that, since the publication of the idea of the necessary a posteriori, many philosophers have been persuaded that, despite what "pre-theoretical intuitions" may suggest, the triads mentioned above work somewhat independently. This talk partially challenges this "new" way of looking at things. More precisely, it is argued that while the notion of metaphysical modality related to
I present a metaphilosophical discussion of the recent debate between the two major views of
perceptual phenomenology – disjunctivist naïve realism and the common factor view –, which is
the contemporary heir to the older philosophical debate about whether perception is direct. The
present investigation is sparked by the fact that the debate has been characterized by little to no
progress towards consensus, where this is not to be explained in terms of any shortage of open
and subtle argumentation. I propose we ask ourselves why argumentation has been failing in this
way. Indeed, we should address the question: could one of the two major views of perception be
shown to be better supported than its rival by the rest of what we know about the world? I defend
a negative answer: no known phenomenon, and no phenomenon we can justifiably expect to come
to know, represents evidence, empirical or a priori, for either of the major views of perceptual
phenomenology. That conclusion is not based on anything like a pessimistic induction, but rather
on the identification of a feature of the debate which ensures that such arguments won’t work.
That feature is what I call its encapsulation: the major theories disagree exclusively about the
27. FEARFUL OBJECT SEEING AND THE AFFECTIVE PENETRATION OF
PERCEPTION
Dr. Felipe Nogueira de Carvalho
PPGFil – UFMG
felipencarvalho@gmail.com
The talk will proceed as follows: a. What is an only-trope metaphysics? b. Nominalist semantics is presented; c. Only-trope nominalist semantics: formal conditions and problems; d. Inconsistencies of the only-trope theory are presented.
In his article "On a confusion about a function of consciousness", Ned Block proposes that the term consciousness conflates ideas that should be kept separate. He then coins two concepts: phenomenal consciousness (P-consciousness) and access consciousness (A-consciousness). The former concerns first-person experience, Thomas Nagel's "what it is like"; the latter covers the functional aspect of consciousness, that is, the role it plays within cognition. Building on this distinction, Block draws on the experiment conducted by George Sperling in 1961 to argue that phenomenal consciousness overflows the capacities of access consciousness, that is, that access consciousness is less rich and has a smaller capacity to deal with inputs from the distal scene. This position became known as Phenomenological Overflow, and that of its critics as No-Overflow. The central point of the debate turns on the way input is processed by the memory modules. Sperling's experiment suggests that, prior to working memory, there is a module with a high storage capacity but a short duration, which he named iconic memory. The results of the experiment indicate that volunteers were able to store all the letters composing a grid but to report, on average, only four of them. Block then holds that phenomenal consciousness was already present in iconic memory, access consciousness being related to working memory. This conclusion of Block's, however, has been subject to many criticisms. First, the results do not seem to necessarily indicate that the representations in iconic memory
Biological complexity is hard to formalise. Is the E. coli metabolism less complex than the
structure of an eye? In what degree, and why?
The question around biological complexity includes three different issues: (1) its definition, (2)
its measurement and (3) its sources. These issues are independent but interrelated. Many
definitions and measurements of complexity often imply an answer also to the question of its
source. Adaptationist accounts, for example, identify complexity with phenotypic fitness, whose
increase is caused by selection (Grafen 2007). This classical adaptationist answer has been
increasingly challenged by proposals such as Evo-Devo, the Extended Evolutionary Synthesis and
studies of self-organising systems, to name a few. This last approach, for example, identifies
complexity as a non-selective, emergent trait linked to laws governing open, non-hierarchical and
far-from-equilibrium systems (Kauffman 2000). Other formal attempts to define and measure
complexity (based on thermodynamics, logic, statistics and even fractals, Mitchell 2009) are
inapplicable even to the simplest biological entities.
To overcome these problems of the classical approaches to biological complexity, we propose a
bi-dimensional form function space model that shifts the focus from complexity tout court to
complexity changes, and measures such changes in terms of form and function.
This model interprets many evolutionary phenomena as a change in morphology and/or in the
functions of an organism (e.g. Mayr 1960, Erwin 2015). As to morphology, evolutionary
phenomena can (Müller 2008) (a) leave an existing trait unaffected; (b) entail a new trait; (c)
create a new module, an arrangement of traits forming a standard building unit; (d) require a
new body plan, a new arrangement of modules. As to function, evolutionary phenomena can: (a)
leave an existing function unaffected (e.g. mimicry in the B. betularia). It’s a ‘do the same in the
same way’ situation; (b) incrementally improve some ability already owned by the organism. We
can refer to this situation as ‘do the same in a new way’ (e.g. new metabolic routes); (c) entail a
completely new function; the motto here is ‘do something new’ (e.g., the thermoregulation
provided by feathers).
Any evolutionary change can thus be mapped and analysed in our bi-dimensional form-function
space. For example, when neither form nor function change (e.g. B. betularia), complexity has
hardly changed at all. When form is maintained but function changes, we have an exaptation.
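A minimal sketch of how evolutionary changes might be located in the proposed form-function space (my illustration, not the authors' model: the enum names and the coarse `classify` verdicts are assumptions based on the categories listed above):

```python
from enum import Enum

# The four morphology outcomes and three function outcomes named above,
# encoded as the two axes of the form-function space.
class Form(Enum):
    UNAFFECTED = 0     # existing trait unchanged
    NEW_TRAIT = 1
    NEW_MODULE = 2
    NEW_BODY_PLAN = 3

class Function(Enum):
    SAME_SAME_WAY = 0  # 'do the same in the same way' (e.g. mimicry)
    SAME_NEW_WAY = 1   # 'do the same in a new way' (e.g. new metabolic routes)
    NEW_FUNCTION = 2   # 'do something new' (e.g. thermoregulation by feathers)

def classify(form, function):
    """Map an evolutionary change to a coarse verdict about complexity."""
    if form is Form.UNAFFECTED and function is Function.SAME_SAME_WAY:
        return "complexity hardly changed"   # e.g. B. betularia
    if form is Form.UNAFFECTED:
        return "exaptation"                  # form kept, function changed
    return "complexity changed in form"

assert classify(Form.UNAFFECTED, Function.SAME_SAME_WAY) == "complexity hardly changed"
assert classify(Form.UNAFFECTED, Function.NEW_FUNCTION) == "exaptation"
```

The point of the sketch is only that any change receives a position on both axes, so complexity comparisons become comparisons of movements in the space rather than of complexity tout court.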
I will defend here the thesis that, from the neural point of view, it is impossible to distinguish false memories from true memories. To reach this goal, however, we will first have to abandon some presuppositions widely disseminated in the philosophy of memory. It is currently accepted that we are able to retrieve memories in the present because there is a memory trace that was generated by experience in the past. Philosophers hold that (1) for each memory there is a trace, stored in the brain individually and in isolation, which remains available until retrieval. In addition, it is assumed that (2) memory serves to preserve information and that, barring cases of forgetting, we will retrieve that information just as it was initially consolidated. Another common presupposition in the philosophy of memory says that (3) false memories result from the absence of traces, since they correspond to no event the subject experienced and therefore cannot be the result of experience. However, the current paradigm in the cognitive sciences understands memory as a constructive rather than a reproductive process, for it has been discovered that traces are encoded as a pattern of neural activation in the group of neurons that was modified during the experience. This leads us to deny (1), since the neuronal populations that were
Pust (2014) defined moderate rationalism as the thesis that having a rational intuition in favor
of a given proposition provides prima facie justification for believing that proposition. Starting
from this definition, he argued against defenses of two theses: (a) that empirical evidence is
necessary to justify accepting moderate rationalism, and (b) that empirical evidence would
suffice to persuade an empiricist opponent to accept moderate rationalism. Those defenses
consist in claiming that empirical evidence is necessary for knowing that the occurrence of
some belief is in fact based on rational intuition; in claiming that empirical evidence is
necessary to justify the thesis that a putative source of evidence is reliable; and, finally, in
claiming that empirical evidence would be dialectically sufficient for rationalists to win the
traditional dispute with empiricists. Pust argued against these three defenses. In this talk we
will object to Pust's arguments and conclude that theses (a) and (b), as well as some of their
defenses, remain plausible.
One of the central and most attractive features of moral contractualism is that the theory is
anti-aggregative: it does not appeal to the claims of groups of individuals, but examines how
actions affect each individual. This allows contractualists to avoid vindicating cases in which a
minority is severely harmed in order to bring a (from the individual point of view)
insignificant amount of good to a majority, which is not the case for consequentialist theories
committed to aggregation. There is, however, a problem for a theory without aggregation.
Anti-aggregative theories have difficulty dealing with cases that involve
Speech act theories of fiction propose that in writing a fictional work the author performs a
sui generis speech act dubbed fiction-making. Fiction-making is often characterized by
reflexive intentions similar to Grice's account of assertion, modulo the substitution of
make-belief for belief. Recently, it has become widely accepted that fictions are composed of a
patchwork of at least two speech acts: besides acts of fiction-making, the author of a work of
fiction performs regular acts of assertion. I will argue against patchwork approaches. First, I
will present a speech act theory of fiction. Patchwork approaches are usually motivated by two
observations: some utterances in fictional works meet all the requisites for being assertions, so
they should be considered as such; and fictional works convey, in addition to information
about the fictional world, information about the actual world, and in conveying information
about the real world the author asserts his beliefs. I will argue that while it is plausible to
accept the view that there are sincere assertions in fictional works, accepting insincere
assertions, i.e. lies, in such environments clashes with widely accepted intuitions about
fictionality. The lack of secure access to the author's beliefs, including what the author takes to
be true information about the actual world, together with the conclusion that there are no
insincere assertions in fiction, results in an inconsistent attribution of overt intentions to the
author's utterances, an important attribute for the recognition of a speech act. In most cases,
the reader cannot be sure whether the author is in fact asserting, that is, prescribing belief, or
fiction-making, i.e. prescribing make-belief. Instead, I will propose a uniform speech act
theory of fiction, in which all utterances are taken to be acts of fiction-making.
Keywords: fiction; assertion; lying; belief; make-belief.
Appositive concept designators such as “the concept horse” crucially differ from other expressions
of the form “the ø a”. Most expressions of this form function in the following way (see Schnieder
2006): As part of “the property wisdom”, “wisdom”
a) refers to the same thing as the complex expression, namely the property wisdom,
This is an attempt to expand the dialogue between the philosophy of cognitive science and
epistemology. Our proposal, inspired by the enactive-ecological perspective and by
Wittgenstein, is that we can see how all knowledge (including the propositional variety) hinges
on abilities. We start from the philosophy of cognitive science, with a certain understanding of
cognition. From that, and from epistemological considerations about normativity, we look at
the discussion found in readings and appropriations of Wittgenstein's thought: the discussion
about the nature of so-called hinge commitments. I argue that a certain reading of hinge
commitments can answer what I call the enactive challenge to the characterization of
propositional perceptual knowledge. First, I explain in more detail the enactive perspective,
our starting point. Second, we present the challenge as a set of difficulties (not exclusive to the
enactive perspective) in explaining the normativity and rationality of perception. In summary,
the enactive challenge is how to define the relation between perception and propositional
perceptual knowledge. Characterizing that relation is made difficult once one must keep the
assumption that perception provides reasons for the propositional attitudes in question while
at the same time assuming a discontinuity between perceiving and its know-that. The
discontinuity is that epistemic and doxastic mental states are representational while perception
is not (as the enactive perspective claims). That raises the issue of explaining how this
non-representational state is normative of, or gives legitimacy to, representational states of
knowledge and belief that p. It seems problematic for any theory of cognition to claim that
questions about the normativity of epistemic and doxastic states cannot be answered within a
naturalistic framework, even a non-reductive one. That would amount to saying that a very
important part of cognition, our knowledgeable engagement with the world, could not be
explained with the tools of cognitive science and the philosophy of cognitive science. We think
this is not the case. We try to show how considerations about the structure of rational
evaluation, first made by Wittgenstein in On Certainty, can help with our conceptual
puzzlement about the relation between perception and perceptual knowledge. Connections
between the enactive challenge and hinge epistemology will become clearer when I apply
Moyal-Sharrock's interpretation of hinges to a characterization of the "arational" or animal
basis that explains the normativity of perception in relation to perceptual knowledge. After
exploring the enactive view of perception, which understands perception in terms of skills, and
a reading of hinge commitments like the one Moyal-Sharrock attributes to Wittgenstein, we
argue for the following: perceptual skills hinge on the acquisition of perceptual language, and
in doing so, so do perceptual thought, belief and propositional knowledge. They are the
unreflective ways of acting that allow for the specification of invariant information.
Keywords: enactivism; perceptual knowledge and skills; propositional attitudes; hinge
commitments.
Saul Kripke, in Naming and Necessity, proposed convincing cases of necessary truths that are
knowable only a posteriori, as well as cases of contingent truths that are knowable a priori.
However, the plausibility of these cases has been contested by philosophers who use a
two-dimensional framework to represent the meaning and reference of several kinds of
linguistic expressions, e.g. proper names and natural kind terms. The most influential
two-dimensional approach is Epistemic Two-Dimensionalism (E2-D), developed by David
Chalmers (2006) and Frank Jackson (2004). According to this approach, propositions have
truth-values evaluated in two different ways: the first involves considering a possible world "as
actual" and corresponds to what Chalmers calls the "primary intension" and Jackson the
"A-intension"; the second involves considering a possible world "as counterfactual" and
corresponds to what Chalmers calls the "secondary intension" and Jackson the "C-intension".
This analysis shows that there are no necessary a posteriori or contingent a priori truths: in
Chalmers's vocabulary, necessary a posteriori propositions are propositions that have a
necessary secondary intension but a contingent primary intension; similarly, contingent a
priori propositions are propositions that have a necessary primary intension but a contingent
secondary intension. My goal in this talk is to present in more detail the strategy and
philosophical motivations of E2-D against the Kripkean cases, as well as the state of the art
regarding the main arguments for and against E2-D.
Keywords: Epistemic Two-Dimensionalism. Necessary A Posteriori. Contingent A Priori.
References:
CHALMERS, D. "The Foundations of Two-Dimensional Semantics". In: GARCÍA-CARPINTERO,
M. and MACIÀ, J. (eds.). Two-Dimensional Semantics: Foundations and Applications. Oxford
University Press, pp. 55-140, 2006.
JACKSON, F. "Why We Need A-Intensions". Philosophical Studies 118, 1/2 (2004), 257-277.
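The two ways of evaluating a proposition can be illustrated with a toy model of the Kripkean "Hesperus is Phosphorus" case; the two-world space and the reference assignments below are invented for illustration, not drawn from Chalmers or Jackson:

```python
# Toy E2-D evaluation of "Hesperus is Phosphorus" over an invented
# two-world space. In w1 both names pick out Venus; in w2 "Phosphorus"
# picks out Mars (the epistemic possibility that the names differ).
reference = {
    "w1": {"Hesperus": "Venus", "Phosphorus": "Venus"},
    "w2": {"Hesperus": "Venus", "Phosphorus": "Mars"},
}
worlds = list(reference)

def primary_intension(w_considered_as_actual):
    # Considering w as actual: reference is fixed in w itself.
    r = reference[w_considered_as_actual]
    return r["Hesperus"] == r["Phosphorus"]

def secondary_intension(actual, w_counterfactual):
    # Considering w as counterfactual: reference is fixed in the actual
    # world and held rigid, so the counterfactual world plays no role
    # in an identity between rigid designators.
    r = reference[actual]
    return r["Hesperus"] == r["Phosphorus"]

# Necessary a posteriori: necessary secondary intension (taking w1 as
# actual) but contingent primary intension.
assert all(secondary_intension("w1", w) for w in worlds)
assert not all(primary_intension(w) for w in worlds)
```

The final two assertions mirror Chalmers's gloss quoted above: the secondary intension holds in every world, while the primary intension fails in `w2`.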
Fodor (1987) maintains that the Representational Theory of Mind (RTM) endorses
commonsense intentional realism while simultaneously endorsing a thesis about the "syntactic
mechanisms" that secure causal efficacy for the propositional attitudes. RTM can thus preserve
the commonsense perspective on the mind and propose a materialist solution to the problem
of mental causation. Stich (1983) denies that RTM can endorse intentional realism while also
endorsing a thesis about the causal efficacy of mental states exclusively in terms of the
syntactic mechanisms of mental states. In this paper I present the main features of RTM that
give rise to this objection and suggest a diagnosis of where Stich goes wrong. Finally, I argue
that once this mistake is rectified, it is not
My aim in this paper is to assess whether or not everyday reasoning is permeated by logical
processes. The Mental Logic Theory (MLT) holds that "reasoning" is merely a matter of
applying rules of inference. MLT, however, has been constantly challenged by alternative
theoretical programs, especially by Johnson-Laird's Mental Models Theory. Since logic is
traditionally recognized as the "science" of valid arguments, while proof theories are distinct
ways of approaching validity, the
My proposal explores the concept of fiction as an element present not only in the theoretical
domains of science and philosophy (through the narratives that make up so-called
"impossible" thought experiments), but also in the field of normativity. Starting from
Thomasson's (1999) conceptions, I will assess two types of fiction present in the legal
normative domain, namely (1) "ordinary" normative fictions, or fictio legis, deliberately
created to handle situations where there is no jurisdiction; and (2) the scientific normative
fictions of law, which deal with theoretical questions concerning the foundation of the legal
order.
In philosophy, the fictional element is noticeable in, among other cases, thought experiments,
where we are invited to "act as if" a state of affairs, often an impossible one, were the case, in
order to derive the relevant consequences. In the natural sciences, Hans Vaihinger (1924)
defended fictions as fundamental elements guiding our knowledge, insofar as we would have to
"act as if" empirical reality corresponded to our models of knowledge. In logic, contemporary
philosophers such as David Lewis (1978) propose operators like "according to the fiction" in
the semantics of the logical treatment of non-existent objects.
Against this background, my purpose is to assess the use of fiction in a specific domain:
normativity. In law, already in the Roman legal tradition, we find fictiones legis such as the
one determining that, in wartime, a soldier who dies in captivity must be legally treated "as if"
he had died within Roman territory and as a free citizen, so as to guarantee inheritance rights.
In the scientific study of law and its norms, in turn, we have the notion of the basic norm,
which grounds legal systems, giving them the unity that makes their study possible at all. Such
a norm does not actually exist within the system, and the idea that it must be presupposed as a
fiction was introduced by the legal theorist Hans Kelsen (1979). Deeply inspired by Vaihinger's
"Philosophie des Als Ob", the Kelsenian approach remains widely controversial to this day.
In a series of recent publications, Kit Fine has developed a new approach to semantics based on
the idea of states, or truthmakers, exactly verifying statements. Distinctive of this relation, among
others, is the feature that a state is supposed to be wholly relevant to the proposition it exactly
verifies. Thus, while the state of Socrates being Greek exactly verifies the proposition that
Socrates is Greek, the fusion of that state with the state of its raining now fails to do so, since it
has a part which plays no role in rendering the proposition in question true. While
truthmaker semantics has already been applied to a variety of topics, its application to modalities
is still an open issue. In this talk, I first present a truthmaker semantics for propositional modal
logic S5 which is provably sound and complete. Central to the clauses for ‘□’ and ‘♢’ is the relation
of a state being an alternative to another state. As with exact verification (and exact falsification),
alternatives are subject to constraints of relevance. Having presented the clauses, I proceed to a
brief discussion of their consequences for the neighboring issue of what the grounds for
necessities are, that is: given a necessary proposition, why is it necessarily the case? Several of
these consequences turn out to match intuitively plausible stances on this question and to
accord with the main tenets of the logic of ground, which provides further support for the
semantics proposed.
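The relevance constraint on exact verification can be made concrete in a small model; the encoding of states as frozensets with fusion as union is my own illustrative choice, and only the standard non-modal clauses are sketched (the talk's clauses for '□' and '♢' are not reproduced here):

```python
# States are frozensets of atomic situations; fusion is set union.
greek = frozenset({"socrates_is_greek"})
rain = frozenset({"it_is_raining"})

def fuse(s, t):
    return s | t

# Exact verifiers of atomic propositions, stipulated directly.
verifiers = {
    "Greek(socrates)": {greek},
    "Raining": {rain},
}

def verifies_atom(s, a):
    return s in verifiers[a]

def verifies_conj(s, a, b):
    # Standard clause: s exactly verifies A & B iff s is the fusion of
    # an exact verifier of A with an exact verifier of B.
    return any(s == fuse(t, u) for t in verifiers[a] for u in verifiers[b])

# The Greek-state exactly verifies the proposition...
assert verifies_atom(greek, "Greek(socrates)")
# ...but its fusion with the rain-state does not: it has an irrelevant part.
assert not verifies_atom(fuse(greek, rain), "Greek(socrates)")
# The fusion does exactly verify the conjunction.
assert verifies_conj(fuse(greek, rain), "Greek(socrates)", "Raining")
```

The second assertion reproduces the Socrates example from the abstract: the fused state has a part playing no role in making "Socrates is Greek" true.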
This paper deals with suspension of judgment and ignorance. Its aim is to present and discuss
the place of ignorance in suspension of judgment, specifically as a necessary condition for
suspension of judgment to be rational and justified. To that end, it first starts from a general
and neutral perspective on suspension of judgment, so that specific conceptions may share the
paper's conclusions by adapting them to their particularities. Second, three different
interpretations of the concept of propositional ignorance are presented and discussed: one
considered standard, understood as absence of knowledge, and two alternative
interpretations, the first understood as absence of true belief and the second as absence of
justification for believing, considering their
The aim of this paper is to investigate the scope and limits of the new Big Data technologies in
the detection of causal relations in the context of scientific research. The question guiding our
presentation can be formulated as follows: what criteria are involved in the choice of relations
(causal and/or correlational) present in the massive amount of data available for scientific
analysis? To investigate this question, we present basic concepts of statistics, discussing the
extent to which this conceptual framework carries over to the analysis of massive amounts of
data. We then discuss two contemporary approaches, proposed by Pietsch (2013, 2014) and
Pearl (2013, 2015, 2018), concerning methods for detecting causal relations in the context of
Big Data techniques. Pietsch (2013) claims that Big Data analysis techniques, combined with
reasoning by eliminative induction, constitute a criterion for uncovering causal relations. For
the author, the method of eliminative induction makes it possible to discover which properties
are relevant in a given context, according to the parameters established, on the basis of
interventions performed on the object of study. According to Pearl (2013), beyond establishing
relevance/irrelevance among variables, the detection of causal relations requires uncovering
the directionality between events. For Pearl, diagrammatic reasoning makes it possible to
visualize the directionality of the variables analyzed, for instance by the intervention method
proposed by Pietsch. Pearl (2013) thus proposes that the detection of causal relations is
possible through the construction and manipulation of diagrams by counterfactual reasoning.
We take the two proposals to be complementary, enabling our investigation into the scope and
limits of the new Big Data technologies in the detection of causal relations in scientific
research. Finally, we offer an overall assessment of the proposals presented, with illustrative
examples of the scope and limits of applying Big Data techniques in scientific practice.
Keywords: Big Data; statistics; causality; correlation.
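The contrast between observed correlation and intervention that motivates Pearl's diagrammatic method can be simulated in a few lines; the confounder structure Z → X, Z → Y below is a stock textbook example, not one taken from the abstract:

```python
# A structural causal model with a confounder: Z causes both X and Y,
# but X has no effect on Y. Observation shows an X-Y correlation;
# intervening on X (cutting the Z -> X arrow) shows no causal effect.
import random

random.seed(0)

def sample(do_x=None):
    z = random.gauss(0, 1)                                   # confounder
    x = z + random.gauss(0, 0.1) if do_x is None else do_x   # do(X) cuts Z -> X
    y = z + random.gauss(0, 0.1)                             # Y depends only on Z
    return x, y

def mean_y(pairs):
    return sum(y for _, y in pairs) / len(pairs)

obs = [sample() for _ in range(10_000)]
# Observationally, Y is higher when X is high (correlation via Z)...
assert mean_y([p for p in obs if p[0] > 0.5]) > mean_y([p for p in obs if p[0] < -0.5])
# ...but under intervention, the value we set X to makes no difference to Y.
do_high = mean_y([sample(do_x=2.0) for _ in range(10_000)])
do_low = mean_y([sample(do_x=-2.0) for _ in range(10_000)])
assert abs(do_high - do_low) < 0.1
```

The intervention step is the simulation analogue of reading directionality off a diagram: deleting the arrow into X is what distinguishes causing Y from merely correlating with it.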
The aim of this paper is to present how our color perception occurs, that is, how we perceive
certain light stimuli as color and how we categorize these different stimuli as distinct from one
another (e.g. yellow is distinct from pink). Human eyes do not filter all possible wavelengths,
which restricts the perception of
Metacognitive feelings (m-feelings) are mental phenomena that have sparked many
discussions in both philosophy and the cognitive sciences. They are primarily understood as
subjective experiences that somehow inform their subject about her mental states. For
example, the feeling of knowing (FOK) occurs when the metacognitive system detects that
something can be retrieved from memory and represents that affordance through a specific
sensation. M-feelings thus have a dual nature: they have phenomenal properties and are at the
same time capable of indicating something about the subject's cognitive processes.
In trying to account for these characteristics, philosophers offer different perspectives on the
structure of m-feelings: conceptual theories treat them as phenomenal experiences with an
attached concept of the relevant mental state, while expressive accounts argue that m-feelings
directly point, through their valence and intensity, to a cognitive affordance. Intuitively,
conceptual theories seem more acceptable, since they can more clearly distinguish between the
phenomenal and representational properties of m-feelings: while the former are understood in
terms of bodily sensations and their valence, the latter should be characterized by
The aim of this paper is to critically analyze, in the context of Big Data analysis techniques, the
problem raised by Andreas Matthias (2004) concerning the responsibility gap. The problem
can be formulated as follows: since artificial autonomous systems (such as the machine
learning systems used in the analysis of big data) are able to make decisions autonomously,
without human control, yet with effects on human beings, a gap arises in the attribution of
responsibility for the consequences of those decisions. We will try to show that the notion of
shared agency can help overcome this problem, making it possible to attribute to artificial
systems derived responsibility for the implications of decisions made without human control.
Such an attribution seems ethically legitimate since, as we will try to show, artificial
autonomous systems are part of a network of natural and artificial agents that, on our view,
must be considered co-responsible for the consequences of those decisions.
Walton (1992, 1996, 1999, 2008) showed that many fallacies are incorrect applications of
argument strategies that can also be applied correctly. For him, the appeal to ignorance is
among these strategies. Walton presents two argument schemes as a characterization of the
appeal to ignorance:
(I) It has not been proved that P is true. Therefore, P is false.
(II) It has not been proved that P is false. Therefore, P is true.
This paper has two purposes. The first one is to show that some of the arguments Walton
identifies as correct appeals to ignorance are fallacious, whereas the rest are correct but can only
be considered as appeals to ignorance if we adopt a non-traditional understanding of this
category. The second purpose is to show that there are arguments not considered by Walton that
could be correct appeals to ignorance as traditionally conceived.
Concerning the first purpose, I identify two types of argument that, according to Walton,
contain correct arguments from ignorance. I call the first type "pragmatic arguments", insofar
as certain practical considerations present in the context where such an argument is put
forward would explain its correctness. I argue that these arguments are incorrect, insofar as
the support the premises give to the conclusion in a correct argument is understood as
implying evidential support of the former for the latter (in the sense that, for a rational being
who understands the premises, the conclusion and the inferential relation between them, the
premises count as evidence for the conclusion), and pragmatic arguments give no evidential
support to their conclusions.
The second type consists of arguments based both on lack of knowledge and on evidence. As
Copi and Cohen (2014: 132) note, such arguments are not appeals to ignorance as traditionally
conceived, since they appeal both to ignorance and to knowledge. Walton claims (1999:
370-371) that it is reasonable to class them as appeals to ignorance, insofar as they do appeal
to ignorance after all. I argue that this is a bad answer, since it renders all non-deductive
arguments appeals to ignorance: any non-deductive argument has a (usually implicit) premise
stating that its conclusion is not known to be false, and usually others that count as evidence
for the conclusion.
In the end, Walton conceives of the appeal to ignorance not in terms of the support the
premises give to the conclusion, but as a pragmatic category that singles out instances of
schemes (I) and (II) by taking into account only what is explicitly said, leaving aside what is
not salient in context.
In order to provide a new enactive-based approach to cognition that deals with the categorical
gap between lower-order and higher-order cognitive processes, Di Paolo, Cuffari and De
Jaegher (2018) recently proposed a rich and intricate theory of cognition that conceives bodies
as linguistic beings and rejects the need for mental representation to explain cognition; I call it
linguistic enactivism. This theory provides a framework for understanding bodies, social
practices, and language that frames cognition as a three-level domain, namely organic
viability, sensorimotor grasp, and social interaction, and presents a model of cognition that
starts from participatory sense-making and builds up through seven further dialectical steps
leading to the notion of linguistic agency, a key notion for considering reference, grammar,
symbols and other features of language from its perspective. In this talk, I will present the
main points of linguistic enactivism and ask how the theory can influence new developments
in research on language. To do that, I will turn, first, to some reflection on what an
understanding of language needs in order to provide a full account of embodied language and,
second, to some examples of contemporary embodied empirical research in neurolinguistics
and on social interaction. Empirical research in neurolinguistics is strongly influenced by
embodied cognition but relies on the notion of mental representation, while research on social
interaction is regarded as an alternative for shifting from a paradigm of mental processes
based on mental representation to the notion of participatory sense-making. In pursuing this
line of investigation, I aim to identify (1) which aspects of current empirical research are
influenced by embodied theses about cognition, and (2) what must be considered in empirical
research in order to avoid the postulation of mental representations. In this talk, I present
work in progress whose overarching aim is to evaluate whether and how linguistic enactivism
can influence empirical research on language.
Within the debate about the natural sciences, although philosophers diverge in their analyses
of countless aspects, there seems to be a consensus on the general conception that scientific
theories (especially successful ones) have empiricism as one of their main characteristics. It is
also well known and accepted, however, that in order to account for the phenomena scientists
must often postulate entities that are not directly accessible to observation, which raises
numerous discussions regarding this empiricism. An explanation of how scientists employ
these unobservable entities can be found in Willard Quine's philosophy of science. Thus,
seeking to understand how scientists postulate unobservable entities, this paper presents how
Quine builds his alternative. This is done, through the analysis of the philosopher's texts, by
presenting his discussion of ontology and language and showing the epistemological
dimension of the questions Quine addresses. Next, the holist position adopted by Quine is
presented, and it is explained how holism, together with another very important element of his
philosophy, pragmatism, is used to speak about scientific theories and their entities. Finally,
as an example, we present the episode in the history of science in which, to solve a major
problem afflicting the most successful theory in physics, scientists came to commit themselves
to the existence of an unobservable entity known as the "Higgs boson", so that, by analyzing
this episode in light of the Quinean alternative presented earlier, we can better understand
how scientists postulate and use unobservable entities in their theories.
Keywords: natural sciences; unobservable entities; holism.
Intuitively, it seems plausible to claim that, given a true proposition (or sentence), there is
something that makes it true. That which makes propositions (or sentences) true is called a
truthmaker. David Armstrong, in his A World of States of Affairs (1997), states the modalized
version of the truthmaker principle as follows:
In the useful if theoretically misleading terminology of possible worlds, if a certain truthmaker
makes a certain truth true, then there is no alternative world where that truthmaker exists but
the truth is a false proposition.
Timothy Williamson, in his article "Truthmakers and the Converse Barcan Formula" (1999),
formalizes the claim that necessarily every truth is necessitated by the existence of something
(TM) as a schema in the language of quantified modal logic:
(TM) □(A → ∃x □(∃y (x = y) → A))
It will be important to assess some semantic aspects of (TM), given that the principle states
that for every world w in which A is true, there is an element p ∈ dom(w) such that for every
world w*, if p ∈ dom(w*), then A is true in w*. Similarly, if A is false in w*, then
p ∉ dom(w*), and therefore dom(w*) ≠ dom(w). Given this, we may claim that accepting
(TM) entails accepting variable domains of quantification. The main goal of the talk will be to
assess (TM), its relation to variable domains, and some of its advantages and difficulties.
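The semantic clause described above can be written out as follows (a rendering of the prose in the variable-domain setting, not a formula taken from Williamson's paper):

```latex
% (TM) and its variable-domain truth condition:
% \Box\bigl(A \to \exists x\, \Box(\exists y\,(x = y) \to A)\bigr) holds iff
\forall w\,\bigl[\, w \Vdash A \;\Rightarrow\;
  \exists p \in \mathrm{dom}(w)\;
  \forall w^{*}\,\bigl(p \in \mathrm{dom}(w^{*}) \Rightarrow w^{*} \Vdash A\bigr) \bigr]
```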
The argument from inductive risk is one of the main arguments against the value-free ideal of
science. While the latter holds that there should be no role for non-epistemic values in the
assessment of scientific hypotheses, the former concludes that, in deciding how sufficient a
given body of evidence is for a specific hypothesis, non-epistemic values must be taken into
account. Two points of consensus have formed in the contemporary science-and-values
literature about this argument: (i) the consensus that it defeats the value-free ideal, and (ii)
the consensus that Heather Douglas's formulation of the argument is broader than that of
Richard
1 The problem
This paper examines whether MacFarlane's relativist framework (2003, 2005, 2014), originally
built to handle a host of expressions (such as temporal indexicals and epistemic modals,
among others), could be extended to a semantic theory that is both (1) able to make sense of
Gibbardian-like standoffs (Gibbard, 1981), while (2) still retaining a truth-conditional view of
indicative conditionals (thus taking them to express propositions), which both Gibbard and
Edgington (1995, 1997) would themselves reject in the face of indicatives being involved in
such standoffs. From a formal standpoint, the technique we employ simply makes minor
tweaks to key aspects of Angelika Kratzer's restrictor view of conditionals (1986) and then fits
it back into MacFarlane's general picture. We argue in favor of such a view, while also
assessing the merits and drawbacks it may face, since it deviates considerably from standard
approaches to conditionals and to natural language semantics in general.
2 Outlining the argument
Nearly all central debates about the meanings of conditional sentences (in both the indicative
and the subjunctive mood) are rooted in two simple stances about such expressions. The first
opposes (1) those theorists who take indicatives and subjunctives to share (to various degrees)
a common semantic core to those who do not take them to be unified in that way. The second,
in its turn, divides (2) those who hold that conditional sentences can be assigned
truth-conditional contents and express propositions from those who deny that assertions of
conditionals involve truth-conditional contents at all (holding that they involve something else
instead).
Thus, for instance, Stalnaker (1975) and Edgington (1995, 1997) agree on where they stand on
(1): both take indicatives and subjunctives to be semantically unified in some way. On the
other hand, they diverge sharply on question (2): Stalnaker claims that conditional sentences
can be assigned truth-conditional contents, which are in fact given by his similarity-based
framework, while Edgington, on the opposite side, claims that truth-conditions are too strong
a condition to hold for plausible everyday uses.
References
D. Edgington. Truth, objectivity, counterfactuals and Gibbard. Mind, 106(421):107–116, 1997.
A. Gibbard. Two recent theories of conditionals. In W. L. Harper et al., editors, Ifs:
Conditionals, Belief, Decision, Chance, and Time, pages 211–247. D. Reidel Publishing
Company, 1981.
A. Kratzer. Conditionals. Chicago Linguistics Society, 22:1–15, 1986.
D. Lewis. Counterfactuals. Blackwell, 1973.
D. Lewis. Probabilities of conditionals and conditional probabilities. The Philosophical Review,
85(3):297–315, 1976.
This paper aims to investigate the so-called syntactic approach to scientific theories of Logical Empiricism and, more specifically, to analyze Rudolf Carnap's Received View (RV). Historically, the Received View of Carnap and Logical Empiricism dominated the philosophy of science until the mid-1960s, when its fall gave way to what became known as the semantic approach to scientific theories. While the former conceived of theories as axiomatic systems, the proponents of the semantic approach regarded scientific theories as families of models. Nevertheless, these same authors, such as Suppe (2000) and van Fraassen (2007), contributed to the sweeping rejection that Logical Empiricism suffered by maintaining that the movement's syntactic approach was simply the wrong way to proceed in the philosophy of science. Because of this rejection, syntactic approaches were ignored and neglected until just a few years ago, and it is only in recent decades that studies have come to be devoted to the authors of Logical Empiricism. These reappraisals, however, have shown that the rejection of the movement produced a series of caricatures that misrepresented the theoretical constructions and the motivations of its members. Moreover, it became clear that, despite a history of heated debate between syntactic and semantic approaches, there was in fact no confrontation between their central proponents. The rejection thus ended up producing, and "burying", a "straw man" of Logical Empiricism, together with constructions that could still be worth considering today. Accordingly, we propose to analyze in this paper the syntactic approach to scientific theories, that is, Rudolf Carnap's Received View, in contrast with classical characterizations such as Suppe's in The Structure of Scientific Theories (1977). This analysis is carried out by presenting a textually consistent account of Carnap's RV and by assessing to what extent criticisms such as the one concerning the exclusive use of first-order logic are warranted. With this goal in view, we begin with Suppe's (1977) characterization and then turn to Feigl's presentation in "The 'Orthodox' View of Theories" (2004), which offers a general outline of the Received View. With these general lines established, we seek to qualify and refine this outline with Carnap's remarks in several texts, drawing also on Lutz's research in Criteria of Empirical Significance (2012). Finally, with Carnap's account in place, we contrast it with Suppe's characterization and evaluate some of the central criticisms leveled at the Received View, examining whether they apply only to the stereotyped characterization or also to our interpretation.
We conceive of ourselves as beings capable of acting in response to normative reasons. Given that
our normative reasons are usually facts about the circumstances of action, this self-conception
entails that we are capable of acting in response to such facts. A common response to this claim
is to try to deflate it. This response is manifested in the view of those who hold that we always
decide to act in light of our beliefs and that talk of acting or deciding to act in light of facts should
be understood as an elliptical way of talking about deciding to act in light of true beliefs. The main
support for this view comes from the argument from error cases, i.e., cases in which the agent
decides to perform action A in light of the consideration that M but her belief that M turns out to
be false. In one such case the agent cannot be said to have decided to act in light of the fact that
M, since M is not the case. The right thing to say in this case is that the agent decided to act in
light of her belief that M. But the activities of deciding in light of a fact and deciding in light of a
belief in an error case can be indistinguishable from the standpoint of the agent engaged in
practical thinking. So, the argument goes, we should conclude that the agent is mobilizing exactly
the same capacities for practical thinking in both cases. Given that in the error case she is
mobilizing her capacity to decide how to act in light of her beliefs, the same must be true in the
case in which she acts in response to a perceived fact. What should follow is that acting in light of
a fact is simply a way of acting in light of a belief, namely, to act in light of a true belief.
The goal of this paper is to argue against this deflationary view. I start by offering a
counterexample to the view that to act in light of a fact is to act in light of a true belief. One can act
in light of a true belief but fail to act in light of a fact when one decides to act in light of a
consideration that is true but which one does not know to be true. This shows that the argument
from error cases is unsound. The question is then where the argument goes wrong. I show that
Dancy’s claim that reason explanations are not factive fails to address the argument as I
reconstructed it. I then argue that the argument can be rejected if we adopt a disjunctive view of
acting in light of a consideration (according to which acting in light of a fact and acting in light
of a belief are two subjectively indistinguishable but distinct ways of acting in light of a
consideration) and proceed to motivate this view.
Many philosophers have highlighted the importance of mental images in our mental lives, and it
is widely accepted that most of us frequently experience mental images of all sensory modalities.
Aristotle, for instance, claimed that thought requires images (De anima). Peter Carruthers (2015)
has argued for a similarly strong view, according to which conscious thinking is always sensory
based, so mental images of all sensory modalities are the vehicles of conscious thinking. He does
accept that we have amodal attitudes, like goals, decisions, and some forms of judgment. But
precisely because they are amodal, they cannot, in his view, be conscious. I will argue that some
Dispositional realists hold that the laws of nature are somehow related to the existence of
dispositional properties. According to this account, the explanatory power of scientific laws is
grounded on the existence of irreducible dispositions. However, dispositional realism faces some
drawbacks, especially because of its robust metaphysical commitments. In this talk, my aim is to
approach the debate on laws and dispositions from the point of view of voluntarist epistemology, as conceived mainly by Chakravartty (2017). On his account, voluntarist epistemology is the view that certain beliefs and actions that play a relevant role in the evaluation of arguments are chosen by the subject once she adopts a certain
stance. A stance can be conceived of as a set of epistemic values that guide the assessment of the
degree of epistemic risk that one is willing to assume in defense of a certain ontological theory.
In the case of discussions about the existence of laws of nature, the arguments found in the
literature tend to fit in one of the following three stances: (i) deflationary stance; (ii) empiricist
stance; (iii) metaphysical stance. According to Chakravartty's classification, while the deflationary stance does not take discussions in ontology seriously, the empiricist and metaphysical stances are distinguished by the role given to demands for explanation. On the one
hand, the empiricist stance rejects all demands for explanations of phenomena in terms of
realities underlying the observable. Thus, the empiricist remains agnostic about the existence of
entities and processes that could provide answers to this type of problem. Hence, realist interpretations of laws and dispositions will not be endorsed, since these are based on the alleged
explanatory power of such entities. On the other hand, the metaphysical stance undertakes the
theorization about the unobservable entities and, thus, defends the ontological commitment to
certain entities and processes based on their explanatory power. The distinction between
metaphysical and empiricist stances has great explanatory advantages, such as the clarification
of the epistemic assumptions of certain theories in scientific ontology. Still, this distinction
reveals the difficulty of making sense of the debates between empiricist and metaphysical stances,
so that much of the discussion on laws and dispositions becomes preaching to the converted. In
order to solve this problem, my aim is to outline the general bases of what we could call a
pragmatic stance, not reducible to the three stances formulated by Chakravartty. The pragmatic
Beliefs are mental states that represent the way things are. Insofar as they fit the way things are, beliefs are true; otherwise, they are false. This fit concerns the propositions that figure as the representational content of beliefs, their propositional content. In other words, the propositional content of a belief must correspond to a fact for the belief to be true. Besides being distinguished from other mental states by their representational nature, beliefs have great practical relevance. They figure as elements that determine the course of our actions, presenting their means of execution, i.e., indicating the possible ways of satisfying our desires.
An adequate conception of what beliefs are must acknowledge both the representational nature and the practical relevance of beliefs. With both aspects in view, my talk will address conceptions that prove to be both pragmatist and representationalist. I will start from the conceptions of belief offered by Charles Peirce (1986 [1877] & 1986 [1878]), Frank Ramsey (1990 [1926], 1990 [1927], & 1990 [1929]), and Robert Stalnaker (1984), which acknowledge the representational nature of beliefs and highlight their practical relevance. I will then investigate the hypothesis, put forward by Cheryl Misak in her interpretations of Peirce (Misak 2004 [1991]) and Ramsey (Misak 2017), that beliefs are constituted by sets of expectations.
Expectations concern future experiences, and they reveal themselves above all in the surprise each of us undergoes in the face of recalcitrant experiences. It is not clear whether Misak (2004 [1991] & 2017) proposes relations of identity between beliefs and expectations, or takes them to be distinct despite the alleged constitutive relation. She does explicitly defend, however, the thesis that beliefs guide our actions by means of "habits of expectation", which are confirmed or frustrated by our experiences. It is important to stress that experiences can frustrate expectations even when we are not aware of the presence of those expectations or beliefs, should they differ.
I intend to distinguish the consequences of understanding beliefs as identical to expectations from those of understanding them as determined by expectations. Both positions will be critically assessed in light of objections that appeal to possible counterexamples, such as beliefs about analytic truths and beliefs about past events.
The present work is concerned with the following question: why did Bertrand Russell never adopt – and apparently never even seriously consider adopting – an axiomatic account of the principles of Set Theory?
Russell is indisputably one of the main figures in the whole history of Mathematical Logic and Set
Theory. Of all his contributions, perhaps the most well known are the formulation of the paradox
that bears his name (which showed that something was amiss with the so-called 'naive'
conception of set) and his solution to the difficulty, the theory of logical types. In the same year in which Russell published his most famous attempt at solving his paradox – the article Mathematical Logic as Based on the Theory of Types from 1908 – Ernst Zermelo published a
somewhat 'rival' solution to the very same difficulty: an axiomatic approach to the theory of sets.
Zermelo's Axiomatization – later improved, modified, discussed and rivaled by the approaches of
prominent figures like Fraenkel, Skolem, von Neumann, Bernays, Gödel, Quine and many others
– quickly became the standard framework for investigations in the Foundations of Mathematics.
Surprisingly, however, Russell never published – and, as far as we know, never wrote – any
systematic or even lengthy discussion of axiomatic Set Theory. There are only tangential remarks
scattered throughout his works (for instance in My Philosophical Development) and some of his
personal correspondence (see, for instance Grattan-Guinness's Dear Russell, Dear Jourdain).
Our goal is to reconstruct a critique of Axiomatic Set Theory in the tradition started by Zermelo's
work (such as ZF, NBG, NF, ML, etc.) from some of Russell's writings on the Foundations of
Mathematics. Our conjecture is that Russell never endorsed or seriously addressed any axiomatic
approach to Set Theory because from a very early point in the development of his views (i.e., after
his discovery of his theory of incomplete symbols) he was convinced that sets and/or classes must
be treated as logical constructions. Russell's claim that classes are logical constructions resulted from his realization that the alternatives to this view were – or so he thought – either logically or philosophically unsound. On the one hand, for Russell, the admission of classes as entities on a par
with individuals (entities that are values of first-order variables) led to paradox if adequate
primitive propositions were not assumed; on the other hand, the axiomatic alternative exemplified
what Russell called the method of postulation rather than of construction of mathematical
entities. Russell's reluctance to adopt such a route was not technical, but philosophical: for him
there is an ad hoc aspect to any such axiomatic approach, namely that it postulates entities
that satisfy some conception of set in order to provide Mathematics with the necessary (and,
generally, more than sufficient) amount of existential theorems.
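The paradox referred to above admits a one-line statement; the following is the standard textbook formulation, not a quotation from Russell's own text:

```latex
% Naive comprehension licenses the set of all non-self-membered sets:
R = \{\, x \mid x \notin x \,\}
% Asking whether R belongs to itself yields a contradiction:
R \in R \;\Longleftrightarrow\; R \notin R
```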
This paper reexamines three classical questions about the role of visualization in mathematics, taking as a case study the Euclidean geometrical diagrams presented in the Elements,
A debate on the epistemic role of memory has recently been revived (Lackey 2005; Senor 2007; Lackey 2007; Bernecker 2011): can memory provide knowledge or justification? In this paper I present the main philosophical positions in the philosophy of memory, together with the theoretical background of these theories, with a view to clarifying the factors that make them diverge, and I also make some positive remarks in favor of generationism. The philosophical position that answers this question in the negative is called preservationism. According to this position, in Senor's formulation, a true memorial belief
1 I took as a model the way Elliott Sober analyzes the canonical positions on free will and determinism: a characterization of the theories as basic arguments (modus ponens and modus tollens), since second-order notions about the theories (compatibilism and incompatibilism) allow the nuances of the debate to be perceived more clearly, streamlining further investigation. Available at: https://criticanarede.com/eti_livrearbitrio2.html.