The subjective appearance of coreference is not perfectly reliable, as anyone who has conflated twins knows. However, I argue that there are some thoughts whose content is wholly determined by the subjective appearance of coreference. These thoughts are formed through broadly "internal" processes like inference and imagination. They are, in a special sense, impervious to conflation. I draw some lessons for the theory of content determination and the problem of error.
<redacted for blind review, email for a copy>
Some of our distinctions are not categorical: we do not always draw them, because we can see that we are in a situation where the distinct subject matters can be treated as one. For example, though we sometimes respect the distinction between weight and mass (as when we seek to understand why an astronaut can jump so high on the moon), we often ignore that distinction, speaking and thinking in terms of a generic notion of HEAVINESS. After showing that these partial distinctions are common in science and philosophy, I develop an account of them in terms of an asymmetric substitutability between the generic concept and its disambiguations: while a generic concept like HEAVINESS can be replaced by its disambiguations MASS and WEIGHT, the converse does not hold. I show how this account can help solve a challenging problem: modeling successful communication between two people who do not draw all the same distinctions.
In Progress (email for drafts)
Misrecognition, Mental Content, and Confusion
If you see someone you think is your best friend but who turns out not to be, you are a victim of misrecognition. How should we characterize your mental state at the moment of misrecognition? One tradition treats misrecognition as mere false belief: you believe you are seeing your best friend, but you're not, end of story. But there is another view, found in various early modern authors and occasionally discussed by contemporary philosophers - most notably, Ruth Millikan - which does not treat misrecognition as mere false belief. This tradition takes seriously the etymology of our ordinary word for such cases: confusion. That is, it treats you as simultaneously thinking about two people: your best friend, and the person you have confused with your best friend. In this paper, I argue that this is the correct way of thinking about cases of misrecognition. I do so by leveraging recent work on confusion's inverse, the much-discussed Frege case in which we fail to recognize something. I also appeal to the literature on causal overdetermination. I draw some big-picture lessons for the theory of mental content.
Explanatory Goodness and Mental Content
A metasemantic theory tells us why a particular concept has the content that it in fact has, e.g., why the concept ORANGUTAN has orangutans as its content, rather than, say, Sumatran orangutans or apes. Many believe that the content of a concept has some important causal explanatory connection to that concept. But a plethora of properties stand in a causal explanatory connection to our concepts without being their contents - this is the filtering problem. In this paper, I leverage work from the general philosophy of science on causal explanation to make progress on the filtering problem. In particular, I draw on insights from the discussion of proportionality and stability to weed out problematic contents. The picture that emerges is one where much of the work of metasemantics can be accomplished by appealing to general principles in the theory of good explanation rather than the particularities of mental representation itself.
Underspecificity and Conceptual Change
Not every revision in our representational practices stems from discovery: intuitively, the revision to the practice of treating Pluto as a planet did not involve a discovery about Pluto or planets, but a decision about how we wanted to use our words and concepts going forward. While many theorists have been sympathetic to this practical account of cases like Pluto, there is no general agreement on the formalism for understanding these cases, or on the philosophical basis for distinguishing practically induced revision from epistemically induced revision. In this paper, I leverage recent work on vagueness and communication - work on underspecificity - to provide a formal foundation for understanding practically induced revision. I also develop a story as to what grounds the difference between decision and discovery when it comes to conceptual change.
If It Looks Like a Duck
Sometimes we use words like 'duck' to refer, not to real ducks, but to toy ducks, painted ducks, people wearing duck costumes, etc. In this paper, I show how this "toy duck usage" does not easily reduce to more familiar linguistic phenomena like metaphor and meaning transfer. I develop the suggestion that we sometimes unknowingly employ toy duck usage - that is, we sometimes conflate the "real" meaning of an expression with the meaning on display in paradigmatic cases of toy duck usage. I claim this unknowing conflation underlies familiar debates about Twin Earth and the meanings of natural kind expressions, suggesting that uses of 'water' that apply to XYZ are fundamentally a form of toy duck usage. I leverage research on dual character concepts, lexical polysemy, and natural kind concepts to argue that all kind concepts - whether "natural" or "artifactual" - harbor one dimension according to which explanatory origin is important, and another according to which functional profile is important. Toy duck usage exploits the latter dimension.