
Chicago Area Philosophical Exchange
(CAPHE)

The aim of CAPHE is to foster collaboration, facilitate communication, and build community among the many philosophers in Chicago and its surrounding areas with research interests in contemporary theoretical philosophy.

Organizers: Ray Briggs, Kevin Davey, Mikayla Kelley, Ginger Schultheis, Anubav Vasudevan, Malte Willer

Autumn 2024 Schedule
[Meetings take place 11 am–1 pm in Wieboldt 408 (UChicago campus) unless otherwise noted.]

Friday, October 11

LLMs Can Never Be Ideally Rational

Simon Goldstein (University of Hong Kong)

LLMs have improved dramatically in capability in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words come next. LLMs produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are prompted to make probabilistic predictions about the world, these predictions are guaranteed to be incoherent, and so Dutch bookable. If LLMs are prompted to make choices over actions, their preferences are guaranteed to be intransitive, and so money pumpable. In short, the problem is that selecting an action based on its potential value is structurally different from selecting the description of an action that is most likely given a prompt: probability cannot be forced into the shape of expected value. The in-principle exploitability of LLMs raises doubts about how agential they can become. This exploitability also offers an opportunity for humanity to safely control such AI systems.
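
A toy numerical illustration of the Dutch-book point (a Python sketch, not from the paper; the rain example and the dutch_book_profit helper are hypothetical): if the credences a model reports for a proposition and for its negation sum to more than one, a bookie who sells it a $1 bet on each at those prices collects a sure profit, however the world turns out.

# Sketch only: assumes the model has been prompted to report a credence in
# "rain tomorrow" and, separately, in "no rain tomorrow", and that the two
# reports are incoherent (they sum to more than 1).
def dutch_book_profit(credence_A, credence_not_A, stake=1.0):
    """Bookie sells the agent a $1 bet on A priced at credence_A and a
    $1 bet on not-A priced at credence_not_A. Exactly one bet pays out,
    so the bookie's profit is the same whichever way A turns out."""
    total_price_paid = stake * (credence_A + credence_not_A)
    profit_if_A = total_price_paid - stake      # only the A-bet pays out
    profit_if_not_A = total_price_paid - stake  # only the not-A-bet pays out
    return profit_if_A, profit_if_not_A

# Hypothetical incoherent reports: P(rain) = 0.7, P(no rain) = 0.5.
print(dutch_book_profit(0.7, 0.5))  # ~ (0.2, 0.2): a sure profit for the bookie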

Friday, October 25

Accuracy Maximization for Causal Decision Theory

Melissa Fusco (Columbia University)

Causal decision theorists update by conditionalization, just like evidential decision theorists and rational pure observers do. But should they? Imaging (Lewis, 1976), based on Stalnaker’s selection function semantics for the subjunctive conditional, can, after all, be treated as a recipe for update (Gärdenfors, 1982). And in decision-theoretic contexts, a longstanding—though not popular—gloss on imaging does invoke normative update: conditioning, the story goes, is the epistemically correct response to learning A, while imaging is the epistemically correct response to making A the case. Of course, to be a genuine rival is one thing; to be a viable rival is quite another. Updating by imaging might seem like a born loser: susceptible to both a Dutch Book argument (Teller, 1973) and to the well-known result that conditionalization is the only update rule that maximizes accuracy (Greaves & Wallace, 2006). Despite these standard results, in this paper I do argue that imaging is the rational way to update one’s credences in some contexts of choice.
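
For readers who want the contrast in symbols, here is a schematic statement of the two update rules at issue, in standard textbook notation rather than the paper's own formulation:

% Conditionalization on A: renormalize the prior over the A-worlds.
P_{\mathrm{cond}}(B) \;=\; P(B \mid A) \;=\; \frac{P(A \wedge B)}{P(A)}

% Imaging on A (Lewis 1976): each world w transfers its probability to
% f(w, A), the closest A-world picked out by Stalnaker's selection function f.
P_{A}(B) \;=\; \sum_{w} P(w)\,\mathbf{1}\!\left[f(w, A) \in B\right]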

Friday, November 8

The Logical Firmament

Mike Titelbaum (UW-Madison)

When someone with a firm understanding of the elementary operations nevertheless remains ignorant of a complex logical or mathematical truth, precisely what kind of information are they missing? I identify a new category of truths, “catenary truths”, that accounts for a great deal of such non-omniscience. Most epistemologies of the a priori don’t extend to catenary knowledge, so I offer a novel proposal for how we acquire catenary information. The proposal answers Benacerraf-inspired worries about access to abstracta by suggesting that processes of reasoning instantiate catenary truths. The proposal also sheds new light on whether logic is ampliative, how a calculation is like an experiment, higher-order doubts about deductive reasoning, the inconceivability of logically impossible worlds, and commonalities between mathematical and moral intuition.

Friday, December 6

Learning 'If'

Calum McNamara (Yale/Indiana)

A prominent challenge for Bayesians is to say how your credences should change when you learn an indicative conditional. A number of cases in the literature seem to show that the standard Bayesian update rules, conditionalization and Jeffrey conditionalization, give implausible results when you learn conditionals of this kind. The most famous of these cases is Bas van Fraassen’s Judy Benjamin problem. There, it’s claimed that if, after learning an indicative conditional, your credences satisfy some intuitive desiderata, then you can’t be updating in accordance with standard Bayesianism. In response to this case, some authors have argued against van Fraassen’s desiderata, while others have proposed new updating rules, intended to supplement standard Bayesianism. In this paper, I offer a different kind of response. I first draw a connection between the Judy Benjamin problem, on the one hand, and the thesis known as Stalnaker’s thesis, on the other. Stalnaker’s thesis—which relates your credences in indicative conditionals to your conditional credences—was for a long time thought to be untenable, owing to the famous triviality results of David Lewis and others. However, recent work has shown that, given a particular semantics for indicative conditionals—a sequence semantics which, ironically, was first developed by van Fraassen himself—Stalnaker’s thesis is tenable after all. Here, I adopt the same semantics in order to rebut van Fraassen’s observations in Judy Benjamin. I first show that, given this semantics, the standard Bayesian update rules can satisfy all of the intuitive desiderata in that case, contrary to what van Fraassen claimed. I then show that alternatives to the Bayesian update rules, intended to handle cases like Judy Benjamin, actually turn out to be equivalent to those rules, according to this semantics—at least in many contexts. Thus, what we end up with is a nice, unified account of rational learning—one which fits well with recent work on the semantics of conditionals, and on Stalnaker’s thesis more specifically.
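
As a quick reference, Stalnaker's thesis in its usual schematic form (standard notation, not the paper's own statement):

% Stalnaker's thesis: credence in the indicative conditional "if A, then C"
% equals the conditional credence in C given A, whenever P(A) > 0.
P(A \rightarrow C) \;=\; P(C \mid A)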
