Friday, December 19, 2008

Meyer on Field-style Reformulations of Statistical Mechanics

Glen Meyer offers an in-depth discussion of Field's program to nominalize science with special emphasis on the challenges encountered with classical equilibrium statistical mechanics (CESM). He makes a number of excellent points along the way, but what I like most is his focus on the prevalence of an appeal to what some call "surplus" mathematical structure, i.e. mathematics that has no natural physical interpretation. As he argues, Field could reconstruct configuration spaces for point particles using physical points in space-time, but would face difficulties extending this approach to phase spaces and probability distributions on phase spaces.

One novelty of the paper is a distinction between interpretation and representation. Some mathematical terms in a scientific theory have both a semantic reference and a representational role, while other mathematical terms have a semantic reference with no representational role. When idealizations involve terms of this latter kind, Field-style reformulations are in trouble. For example, Meyer discusses the need to treat certain discrete quantities as continuous in the derivation of the Maxwell-Boltzmann distribution law:
The intended ('intrinsic') interpretations of axioms describing a certain structure forces that structure to represent, as it were, in its entirety, i.e., that this structure be exemplified in the subject matter of the theory. Any introduction of the idealization above at the nominalistic level will therefore force us to adopt assumptions about the physical world that the platonistic theory, despite its use of this idealization, does not make. Unlike the case of point particles, this idealization is not part of the nominalistic content of the platonistic theory and therefore does not belong in any nominalistic reformulation. Without it, however, we have no way of recovering this part of CESM (p. 37).
Here we have a derivation that ordinary theories can ground, but that Field-style nominalistic theories cannot. I agree with Meyer on this point, but it raises a further issue: why is it so useful to make these sorts of non-physical idealizations? Is it merely a pragmatic matter of convenience, or is there something deeper to say about how the mathematics contributes without representing? (See Batterman's recent paper for one answer.)
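To make the kind of idealization at issue a bit more concrete, here is a small sketch of my own (not taken from Meyer's paper): one step in the textbook route to the Maxwell-Boltzmann distribution treats the discrete quantity ln N! as if it were a smooth function of N, replacing it by Stirling's approximation N ln N - N. The snippet below simply compares the exact discrete value with the continuous surrogate.

import math

def exact_log_factorial(n):
    # The genuinely discrete quantity: a finite sum of logarithms.
    return sum(math.log(k) for k in range(1, n + 1))

def stirling(n):
    # The continuum idealization used in the derivation.
    return n * math.log(n) - n

for n in (10, 100, 1000, 10000):
    exact = exact_log_factorial(n)
    approx = stirling(n)
    print(f"N = {n:6d}   ln N! = {exact:12.2f}   Stirling = {approx:12.2f}   "
          f"relative error = {(exact - approx) / exact:.4f}")

The error shrinks rapidly, which is why the derivation can safely proceed as if the discrete structure were continuous even though, as Meyer stresses, nothing physical corresponds to that continuity.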

Tuesday, December 16, 2008

The Limits of Causal Explanation

Woodward's interventionist conception of causal explanation is perhaps the most expansive and well-worked out view on the market. He conceives of a causal explanation as providing information about how the explanandum would vary under appropriate possible manipulations. Among other things, this allows an explanatory role to phenomenological laws or other generalizations that support the right kind of counterfactuals, even if they do not invoke any kind of fundamental or continuous causal process.

Given the recent debates on mathematical explanation of physical phenomena, it's worth wondering if Woodward's account extends to these cases as well. In a short section in the middle of the book, he concedes that not all explanations are causal in his sense:
it has been argued that the stability of planetary orbits depends (mathematically) on the dimensionality of the space-time in which they are situated: such orbits are stable in four-dimensional space-time but would be unstable in a five-dimensional space-time ... it seems implausible to interpret such derivations as telling us what will happen under interventions on the dimensionality of space-time (p. 220).
More generally, when it is unclear how to think of the relevant feature of the explanandum as a variable, Woodward declines to count the explanation as causal.

Still, some mathematical explanations will qualify as causal. This seems to be the case for Lyon and Colyvan's phase space example, but perhaps not for the Konigsberg bridges case I have sometimes appealed to. To see the problem for the bridge case, recall that the crucial theorem is
A connected graph G is Eulerian iff every vertex of G has even valence.
As the bridges form a graph like the one in the figure, with every vertex of odd valence, the graph is non-Eulerian, i.e. no circuit crosses each edge exactly once.
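For readers who want to see the theorem doing its work, here is a minimal sketch in Python (the land-mass labels A-D are my own placeholders for the island and the three banks):

from collections import Counter

# The seven bridges as edges of a multigraph on the four land masses.
bridges = [
    ("A", "B"), ("A", "B"),   # two bridges between the island and one bank
    ("A", "C"), ("A", "C"),   # two bridges between the island and another bank
    ("A", "D"),
    ("B", "D"),
    ("C", "D"),
]

valence = Counter()
for u, v in bridges:
    valence[u] += 1
    valence[v] += 1

print(dict(valence))          # {'A': 5, 'B': 3, 'C': 3, 'D': 3} -- all odd
odd_vertices = [v for v, d in valence.items() if d % 2 == 1]
# The graph is connected, so by the theorem it is Eulerian iff there are no odd vertices.
print("Eulerian circuit possible:", len(odd_vertices) == 0)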



I would argue, though, that as with the space-time example, there is no sense in which a possible intervention would alter the bridges so that they were Eulerian. We could of course destroy some bridges, but this would be a change from one bridge system to another. It seems that to support this position, there must be a clear set of essential properties of the bridge system that are not rightly conceived as variable.

Monday, December 15, 2008

Globe and Mail Drops the Ball

If you needed any more evidence that the general public, including Toronto's Globe and Mail newspaper, doesn't know what "philosophy" means, check out this profile of a self-described "cyberphilosopher" and his campaign to expose the dark side of the parent-child relationship:
The philosophy behind this is codified and has its own lingo. For example RTR (Real-Time Relationship) means you're willing to confront a spouse or parent if you feel they're hurting you. "If you don't want to be a slave, stop acting like a slave," he writes.
One might have hoped for at least one sentence in the article explaining that this has nothing to do with philosophy.

Monday, November 24, 2008

Science Communicators?

Today's Guardian includes an interesting piece by Jim Al-Khalili on the role of what he calls "science communicators" in the public sphere. While one of their obvious functions is to communicate scientific developments in a way that an educated public can understand, he suggests a broader role:
I do feel strongly however that those scientists who have a voice must be doing more than simply popularising their field to attract the next generation into science. Yes, this is vital; but it is also vital that we help defend our rational, secular society against the rising tide of irrationalism and ignorance. Science communicators, for want of a better term for now, have a role to play in explaining not just the scientific facts but how science itself works: that it is not just "another way of viewing the world"; and that without it we would still be living in the dark ages.
Perhaps here is another place where philosophers of science can also make a contribution? There was good evidence of the potential for this at the PSA. While I missed several sessions on science and the public, there was a great series of papers relating to medical issues and broadly feminist epistemology. Susan Hawthorn, for example, gave an illuminating reconstruction of the history and current practice of ADHD medicine, and Intemann and de Melo-Martin offered a critical reconstruction of the science behind the highly publicized HPV vaccine.

Friday, November 14, 2008

Post-blogging the PSA: Gauge Freedom and Drift


It's taken me a few days to recover from the excellent PSA. I talked to many people who had a great time and who thought this year's program was exceptionally well-balanced to reflect both old classics and new debates in philosophy of science.

On the first day I was happy to attend two sessions which reflect the interpretative difficulties arising from the central role of some mathematics. In the first session, Richard Healey summarized his paper "Perfect Symmetries", followed by Hilary Greaves' and David Wallace's attempts to critically reconstruct Healey's central argument. Very roughly, Healey aims to distinguish cases where a symmetry in the models of a theory explains observed empirical symmetries in physical systems from cases where there are theoretical symmetries with no analogous explanatory power. In the latter case, the theoretical symmetries may just amount to 'mathematical fluff' or 'surplus structure' that lack physical significance.

Then it was time for some biology and the symposium "(Mis)representing Mathematical Models in Biology". The session began with biologist Joan Roughgarden's summary of different kinds of models in biology, followed by Griesemer, Bouchard and Millstein talking about different issues in their interpretation. Both Griesemer and Millstein emphasized the importance of a biologically grounded understanding of the components of a biological model, and argued that a merely mathematical definition of such components would block our understanding of biological systems. Millstein was especially emphatic (to quote from a handout from a previous presentation of hers): "Selection and drift are physical, biological phenomena; neither is a mathematical construct." That is, when we look at the changes in some biological system over time, we cannot think of the changes as resulting from a genuine process of selection plus some additional, mathematically represented divergence from an ideal that we label "drift". Instead, drift itself must be countenanced as a genuine process that makes its own positive contribution to what we observe in nature.

While it is a bit of a stretch, there is at least a suggestive analogy between these debates in physics and biology: in both cases, we have a useful and perhaps indispensable mathematically identified feature of our theories whose physical and biological status can remain in doubt, even for our best, current scientific theories. Here, it seems to me, we see some of the costs of deploying mathematics.

The Deep Blue of Poker

New Scientist reports on the success of a new (Canadian!) poker program, Polaris, at beating some top poker players. As with Deep Blue, this success is not exactly decisive as it was restricted to one-on-one play of Limit Texas Hold 'Em. This is surely the easiest version of poker to master.

The article mentions both the lack of perfect information and the contextual nature of the best play as complications in writing a poker program: "One of the fundamental problems for any poker player is that the best strategy varies, depending on your opponent's style of play." While it is easy enough to see how simple calculations could handle the lack of information, I am more interested in seeing how programmers can deal with the interactions between playing style and optimal no-limit betting!

Sunday, November 9, 2008

New Book: Collected Works of Carnap, Vol. 1

One of the best parts of the PSA was the reception hosted by Open Court as part of their launch of the Collected Works of Carnap. The first volume was actually there, and should soon be available for purchase. This volume is perhaps one of the most important as it provides English translations of Carnap's early work for the first time, including his doctoral dissertation. There is an extensive introduction and carefully compiled textual notes.

Real fans of Carnap will like the detailed chronology of Carnap's life. Some things that I never knew before: (i) in WWI, Carnap was initially assigned to the Carpathian mountains because of his skiing ability, (ii) in 1929 Carnap was advised not to publish his paper "On God and the Soul" because "it will make it impossible for him to get a job at a philosophy department anywhere in Germany" (xxxv), and (iii) in 1936 Carnap turned down an offer from Princeton to take up a position at the University of Chicago.

The whole team of editors is to be thanked for their excellent work. Only 12 more volumes left for them to complete!

Is the Standard Model in Trouble?

Maybe everyone who cares about this has already heard, but over the last week there has been an important posting on experiments done at Fermilab that the scientists are having difficulty interpreting. As Physics World reports,
Physicists at the Tevatron collider at Fermilab in the US, which is enjoying extended status as the world’s most powerful particle collider while CERN’s Large Hadron Collider (LHC) awaits repair, have reported signals in their data that hint at the existence of new fundamental particles. Last week members of the CDF experiment, one of the Tevatron’s two huge particle detectors, posted a preprint detailing a large sample of proton–antiproton collisions that cannot be accounted for either by quirks of the CDF detector or by known processes in the standard model of particle physics (arXiv:0810.5357, submitted to PRD).
I am certainly not the person to explain what the anomaly is or how it can be interpreted, but there are a number of interesting features of this development from an HPS perspective. Among other things, there seems to be some disagreement among the members of the large research group about going public with the result at this stage. Looking ahead, there will surely be many accounts of the data, and apparently an attempt to replicate the results at DZero.

For more from the physics blogosphere, see Not Even Wrong.

Tuesday, October 28, 2008

PSA Symposium: Applied Mathematics and the Philosophy of Science

Now that the final version of the PSA program is online, it is about time for me to promote the symposium that I will be part of. Here are the details:

Applied Mathematics and the Philosophy of Science
PSA 2008 Symposium
Parallel Session 6: Saturday, November 8, 9-11:45 am
Room CCA (Conference Center A)
Chair: Paul Teller

Proposed schedule:
9:00-9:30 Christopher Pincock, “The Value of Mathematics for Scientific Confirmation”
9:30-10:00 Stathis Psillos, “What If There Are No Mathematical Entities? Lessons for Scientific Realism”
10:00-10:20 discussion
10:20-10:25 break
10:25-10:55 Mark Wilson, “Leibniz’ ‘Possibilities’ and Our Own”
10:55-11:25 Robert Batterman, “Essential Models and Explanatory Mathematics”
11:25-11:45 discussion

Abstract: This symposium will explore the relevance of philosophical reflection on the details of applied mathematics for current debates in the philosophy of science along four dimensions: (i) scientific representation, (ii) confirmation of scientific theories, (iii) idealization and scientific explanation, (iv) scientific realism. In all four cases the participants aim to show that a clear focus on the contribution that mathematics makes to science sheds new light on traditional positions in the philosophy of science. In some cases the viability of a philosophical view is called into question, while in others a standard thesis receives new support. The symposium is motivated by the realization that the philosophy of mathematics has changed considerably in the last twenty years and the hope that philosophers of science can benefit from this transformation.

For those of you who can't be there, here is a link to a rough draft of my paper. Constructive comments appreciated! Update (April 11, 2009): I have removed this old draft and hope to repost a final version sometime this spring.

Sunday, October 26, 2008

Downward Causation in Fluids?

Bishop claims to have found a case of downward causation in physics based on the existence of what is known as Rayleigh-Benard convection in fluids. In the simplest case we have a fluid like water that is heated from below. What can result, as this image from Wikipedia shows, is a series of cells, known as Benard cells, where the dominant large-scale structure is fluid flowing in interlocking circular patterns.

The claim is that these patterns require new causal powers over and above what can be ascribed to the smaller scale fluid elements: "although the fluid elements are necessary to the existence and dynamics of Benard cells, they are not sufficient to determine the dynamics, nor are they sufficient to fully determine their own motions. Rather, the large-scale structure supplies a governing influence constraining the local dynamics of the fluid elements" (p. 239).

There is no doubt that this is an interesting case that should receive more scrutiny. As with McGivern's article, the tricky interpretative question is how closely we should link the workings of the mathematical model to the genuine causes operating in the system. Bishop's conclusion seems based on taking the representation of fluid elements very seriously, but I am not sure that the link between the representation and reality at this level is well enough understood. Still, I would concede his point that many features of downward causation from philosophical accounts appear in this example.

Wednesday, October 22, 2008

Two Papers on Modeling

I have recently posted two preprint versions of papers that approach modeling from (hopefully) complementary directions. The first, "Modeling Reality", argues that model autonomy and model pluralism are consistent with a limited form of scientific realism. The second, "Towards a Philosophy of Applied Mathematics", argues that applied mathematics is a distinct area of mathematics that deserves further scrutiny by philosophers of mathematics.

Comments are welcome as both papers are part of my larger project!

Thursday, October 16, 2008

Math Education Humor

The Onion offers some math education humor which can probably apply equally well to some introductory logic courses out there.

Tuesday, October 14, 2008

Maddy on Applied Mathematics

In her recent Review of Symbolic Logic article “How Applied Mathematics Became Pure” Maddy offers a rich discussion of the various changes that have occurred in mathematics, science and their relationship. While I am generally sympathetic to her main conclusion that mathematics as it is practiced today has its own science-independent standards of success, I am surprised by her pessimistic conclusions concerning the sort of project on applications that I am engaged in.

Maddy first summarizes the history of scientists who took a modest perspective on the degree to which their mathematical representations were capturing ultimate causal mechanisms:
we have seen how our best mathematical accounts of physical phenomena are not the literal truths Newton took them for but freestanding abstract models that resemble the world in ways that are complex and sometimes not fully understood (p. 33).
She continues that
One clear moral for our understanding of mathematics in application is that we are not in fact uncovering the underlying mathematical structures realized in the world; rather, we are constructing abstract mathematical models and trying our best to make true assertions about the ways in which they do and do not correspond to the physical facts (p. 33).
After surveying some successful accounts of particular cases where we can make these distinctions, she concludes
Given the diversity of the considerations raised to delimit and defend these various mathematizations, anything other than a patient case-by-case approach would appear singularly unpromising (p. 35).
But nothing in the article precludes a useful classification of these sorts of successes into kinds. Of course, such a classification must start with individual cases. This would be just the beginning, especially if we could find patterns across cases. Indeed, it seems like this is just what applied mathematicians are trained to do, as a review of any applied mathematics textbook would reveal.

I grant that this is just a promissory note at this stage, but the attempt to understand and classify successful cases of mathematical modeling is really just another instance of the naturalistic methods that Maddy has applied to set theory.

Sunday, October 12, 2008

McGivern on Multiscale Structure

Back in July I made a brief post on multiscale modeling from the perspective of recent debates on modeling and representation. So I was very happy to come across a recent excellent article by McGivern on “Reductive levels and multi-scale structure”. McGivern gives a very accessible summary of a successful representation of a system involving two time scales, and then goes on to use this to question some of the central steps in Kim’s influential argument against nonreductive physicalism.

To appreciate the central worry, we need the basics of his example. McGivern discusses the case of a damped harmonic oscillator, like a spring suspended in a fluid, where the damping force is proportional to the velocity. So, instead of the simple linear harmonic oscillator
my’’ + ky = 0
we have
my’’ + cy’ + ky = 0
Now this sort of system can be solved exactly, so a multiscale analysis is not required. Still, it is required in other cases, and McGivern shows how it can lead to not only accurate representations of the evolution of the system but also genuine explanatory insight into its features. In this case, we think of the spring evolving according to two time scales, t_s and t_f, where t_f = t and t_s = εt and ε is small. Mathematical operations on the original equation then lead to
y(t) ~ exp(-t_s/2)cos(t_f)
where ~ indicates that this representation of y is an approximation (essentially because we have dropped terms that are higher-order in ε). McGivern then plots the results of this multiscale analysis against the exact analysis and shows how closely they agree.
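For anyone who wants to reproduce that comparison, here is a rough numerical sketch of my own (not McGivern's), assuming the nondimensionalized equation y'' + εy' + y = 0 with y(0) = 1, y'(0) = 0 and a small damping parameter ε:

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1                                   # assumed small damping parameter
t = np.linspace(0.0, 50.0, 2000)

def rhs(t, state):
    # y'' + eps*y' + y = 0 written as a first-order system.
    y, v = state
    return [v, -eps * v - y]

exact = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t,
                  rtol=1e-9, atol=1e-9).y[0]

# Leading-order two-timing approximation: slow decay on t_s = eps*t,
# fast oscillation on t_f = t.
multiscale = np.exp(-eps * t / 2.0) * np.cos(t)

print("maximum difference over this window:", np.max(np.abs(exact - multiscale)))
# For eps = 0.1 the two curves agree closely (the difference is of order eps),
# which is the kind of agreement McGivern's plot displays.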

McGivern’s argument, then, is that the t_s and the t_f components represent distinct multiscale structural properties of the oscillator, but that they are not readily identified with the “micro-based properties” championed by Kim. McGivern goes on to consider the reply that these are not genuine properties of the system, but merely products of mathematical manipulation. This seems to me to be the most serious challenge to his argument, but the important point is that we need to work through the details to see how to interpret the mathematics here. I would expect that different applications of multiscale methods would result in different implications for our metaphysics. I hope that this paper will be studied not only by the philosophy of mind community, but also by people working on modeling. If we can move both debates closer to actual scientific practice, then surely that will be a good thing!

Thursday, October 9, 2008

Norton on Pincock

Over at The Last Donut, John Norton offers a very generous summary of my recent lunchtime talk at the Pittsburgh Center for the Philosophy of Science. I hope to have a revised draft online soon!

Wednesday, October 8, 2008

Intuition of Objects vs. Holism

In a previous post I wondered what the role of intuition of quasi-concrete objects like stroke-inscriptions really was in Parsons' overall epistemology of mathematics. After finally finishing the book, it seems that one clear role, at least, is in Parsons' objection to holism of the sort familiar from Quine and championed in more detail for mathematics by Resnik. In the last chapter of the book Parsons makes this point:
Intuition does play a role in making arithmetic evident to the degree that it is, in that there is a ground level of arithmetic, not extending very far, that is intuitively evident. Furthermore, the objects that play the role of numbers in this low-level arithmetic can continue to do so in a more full-blooded arithmetic theory.
After noting that logical notions allow this further extension, he insists that
the role of intuition does not disappear, because it is central to our conception of a domain of objects satisfying the principles of arithmetic ... an intuitive domain witnesses the possibility of the structure of the numbers (336).
Here, then, we have a definite epistemic role for intuition of objects. It helps us to explain what is different about arithmetic, or at least the fragment of arithmetic that is closely related to these intuitions. (In chapter 7, this fragment is said to not even include exponentiation, so it falls far short of PRA.)

While this objection to holism is quite persuasive, Parsons is at pains to emphasize how modest it really is. He offers some additional discussion of the implications for set theory, but the book seems primarily focused on what distinguishes arithmetic from other mathematical theories. It is an impressive achievement that I am sure will frame much of philosophy of mathematics for a long time.

Wednesday, September 24, 2008

Heis on Reed on Kant and Frege

Jeremy Heis has a useful review of Reed's recent book on Kant and Frege. Heis raises a number of issues for the way Reed structures his discussion, so the review is also helpful as a road map for where future discussion should go.

More on Traffic

The article cited in the last post makes a number of interesting claims about the "phase transition" from free flow traffic to a traffic jam. While such transitions had been reproduced in simulations, it was apparently only recently that they were reproduced in experimental contexts. Exactly what this shows about traffic and phase transitions more generally is less than clear, but the video is worth watching a few times.

Saturday, September 20, 2008

Traffic and Shock Waves

As explained in elementary terms here, traffic can be modeled using a density function ρ(x, t) and a flux function j(ρ(x, t)), i.e. we assume that the rate at which cars pass a given point is a function of the density of cars at that point. Making certain continuity assumptions, we can obtain a conservation law

ρ_t + j’(ρ)ρ_x = 0

where subscripts indicate partial differentiation and j’ indicates differentiation with respect to ρ. If we take j(ρ) = 4ρ(2 - ρ) and start with a discontinuous initial density distribution like

ρ(x, 0) = 1 if x <= 1
ρ(x, 0) = 1/2 if 1 < x <= 3
ρ(x, 0) = 2/3 if x > 3

then we can show how the discontinuity persists and changes over time.
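Here is one way to see that numerically. The sketch below is my own (it is not from the linked notes): it integrates the conservation law in the form ρ_t + (j(ρ))_x = 0 with a simple Lax-Friedrichs scheme, using the flux and initial data above.

import numpy as np

def flux(rho):
    return 4.0 * rho * (2.0 - rho)

# Grid and the discontinuous initial density from the post.
x = np.linspace(-2.0, 8.0, 1001)
dx = x[1] - x[0]
rho = np.where(x <= 1.0, 1.0, np.where(x <= 3.0, 0.5, 2.0 / 3.0))

max_speed = 8.0                      # bound on |j'(rho)| = |8 - 8*rho| for rho in [0, 1]
dt = 0.4 * dx / max_speed            # CFL-respecting time step
T = 0.5

t = 0.0
while t < T:
    # Lax-Friedrichs update in conservation form (simple outflow boundaries).
    rho_ext = np.concatenate(([rho[0]], rho, [rho[-1]]))
    f = flux(rho_ext)
    rho = 0.5 * (rho_ext[2:] + rho_ext[:-2]) - dt / (2.0 * dx) * (f[2:] - f[:-2])
    t += dt

# Sample the solution near the original jump locations.
for xi in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(xi, rho[np.argmin(np.abs(x - xi))])

Plotting ρ at a few times shows the jump at x = 1 spreading into a rarefaction fan while the jump at x = 3 propagates as a sharp front, the persisting discontinuity described next.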

Such persisting discontinuities are called shock waves and appear as lines across which the density changes discontinuously. For example, in this figure we have lines of constant density intersecting at x = 3. The philosophical question is "what are we to make of this representation of a given traffic system?" That is, what does the system have to be like for the representation of a shock wave to be correct? My suggestion is that we need only a thin strip around x = 3 where the density changes very quickly, i.e. so quickly that a driver crossing it would have to decelerate to zero speed. Then, on the other side of the strip, the driver experiences a dramatic drop off in density, and so can accelerate again. Still, there is something a bit strange in talking about shock waves in traffic cases where the number of objects involved is so small, as opposed to fluid cases where many more fluid particles interact across a shock wave. Here, then, I would suggest that we have a case where the mathematics works, but we are less than sure what it is representing in the world.

See this New Scientist article (and amusing video) for the claim that shock waves can be observed in actual traffic experiments (summarizing this 2008 article).

Sunday, September 14, 2008

Steven Weinberg on "Without God"

Physicist Weinberg offers some extended reflections ($) on science and religion before concluding that "there is a certain honor, or perhaps just a grim satisfaction, in facing up to our condition without despair and wishful thinking". Sensible advice, especially in the wake of David Foster Wallace's grim demise. (Carroll has posted a Wallace passage on Cantor.)

On a lighter note, Weinberg offers an amusing analogy with religion without religious belief:
To compare great things with small, people may go to college football games mostly because they enjoy the cheerleading and marching bands, but I doubt if they would keep going to the stadium on Saturday afternoons if the only things happening there were cheerleading and marching bands, without any actual football, so that the cheerleading and the band music were no longer about anything (75).

Zach is Back!

After an inexcusable period of light posting, Richard Zach is back on track with a host of updates on the world of logic, philosophy of mathematics and beyond. Especially useful is this summary of the new issue of the Review of Symbolic Logic.

Thursday, September 4, 2008

Rayo's "On Specifying Truth-Conditions"

This is a long, innovative and frustrating article (Phil. Review 117 (2008): 385-443), at least for someone like me who is not at all inclined to fictionalism or other non-standard approaches to mathematical language. But for those, like Rayo, who think that “expressions with identical syntactic and inferential roles can perform different semantic jobs” (415fn22) this paper may represent the current state of the art.

Rayo aims to defend the stability or coherence of a form of noncommittalism according to which the use of mathematical language does not commit one to the existence of any abstract mathematical objects. An important negative point that Rayo makes early on is that the noncommittalist is unable to accomplish this task by “translating each arithmetical sentence into a sentence that wears its ontological innocence on its sleeve” (385). Still, the positive program is to set out a method of specifying associated truth conditions for these sentences so that these conditions do not involve abstract objects, but only the world being a certain way. The key innovation, if I am not misunderstanding the paper, is to employ the full range of semantic machinery in explaining how the world has to be for a given sentence to be correct. Rayo carefully explores the issue of whether this is legitimate, and concludes that if one begins as a noncommittalist, then one will feel entitled to appeal to numbers and functions in one’s semantic theory. As a result, the noncommittalist can claim an internally coherent or stable package of views.

While much of the interest of the paper is in the details, the following frank admission by Rayo struck me as worth highlighting:
Whatever its plausibility as an explanation of how the standard arithmetical axioms might be rendered meaningful in such a way that their truth is knowable a priori, it should be clear that the arithmetical stipulation is not very plausible as an explanation of how the axioms were actually rendered meaningful or how it is that we actually acquire a priori knowledge of their truth (431-432).
A committalist seems well within her rights to wonder, then, what the point is of carving out a stable position like Rayo’s if it has no bearing on our actual mathematical knowledge or on the actual contribution that mathematics makes in applied contexts.

Sunday, August 31, 2008

Book Project: Mathematics and Scientific Representation

This year I will be trying to come up with a draft of a book that I have been planning for some time, but that I have been quite unsure how to organize. The general issue is the prevalence of mathematics in science and whether there is a philosophical problem lurking here that can be productively discussed. My current angle of attack is to focus on the contribution of this or that part of mathematics to a particular scientific representation. ("Representation" is meant to include both theories and models.) So, we can ask for a given physical situation, context, and mathematical representation of the situation, (i) what does the mathematics contribute to the representation, (ii) how does it make this contribution and (iii) what must be in place for this contribution to occur?

To avoid devolving into a list of examples, I am also trying to come up with different sorts of representations and different ways that mathematics might contribute. At the moment these are: (i) the math is intrinsic/extrinsic to the content of the representation, (ii) the representation is concrete and causal or abstract and acausal, (iii) the representation is concrete with a fixed interpretation or abstract and varying, (iv) the scale of the representation and (v) the global (as in a constitutive framework) or local character of the representation. In future posts I will try to clarify these dimensions and offer examples of different ways in which mathematics contributes to them.

The main point of the book, though, is to argue that the contribution that mathematics makes in all these different kinds of cases can generally be classified as epistemic. That is, mathematics helps us to formulate representations that we can confirm given the data we can actually collect. So, there will be many issues related to confirmation and epistemology that I will try to explore here as well.

Pointers to similar projects or projects pursuing this issue in a different way are welcome!

Wednesday, August 27, 2008

Explaining Clumps via Transient Simulations

Following up the previous post on Batterman and mathematical explanation, here is a case where a mathematical explanation has been offered of a physical phenomenon and the explanation depends on not taking asymptotic limits. The phenomenon in question is the "clumping" of species around a given ecological niche. This is widely observed, but conflicts with equilibrium analyses of the relevant ecological models, which instead predict that a single species will occupy a single niche.

As Nee & Colegrave reported in 2006 (Nature 441: 417-418), Scheffer & van Nes (DOI: 10.1073/pnas.0508024103) overcame this unsatisfactory state of affairs by running simulations that examine the long-term, but still transient, behavior of the same ecological models. This successfully reproduced the clumping observed in ecological systems:
Analytical work looks at the long-term equilibria of models, whereas a simulation study allows the system to be observed as it moves towards these equilibria ... The clumps they observe are transient, and each will ultimately be thinned out to a single species. But 'ultimately' can be a very long time indeed: we now know that transient phenomena can be very long-lasting, and hence, important in ecology, and such phenomena can be studied effectively only by simulation (417).
While the distinction between analysis and simulation seems to me to be a bit exaggerated, the basic point remains: we can sometimes explain natural phenomena using mathematics only by not taking limits. Limits can lead to misrepresentations just as much as any other mathematical technique. More to the point, explanatory power can arise from examining the non-limiting behavior of the system.
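For readers who want a feel for the kind of simulation involved, here is a toy sketch of my own in the general spirit of the Scheffer & van Nes setup (it is not their code, and the parameter values are guesses): many species compete along a one-dimensional niche axis via a Gaussian kernel, and we look at abundances at transient rather than equilibrium times.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S = 100                                        # number of species (assumed)
mu = np.linspace(0.0, 1.0, S)                  # positions on the niche axis
sigma = 0.15                                   # competition kernel width (assumed)

# Lotka-Volterra competition with a Gaussian kernel and uniform carrying capacity.
alpha = np.exp(-(mu[:, None] - mu[None, :]) ** 2 / (2.0 * sigma ** 2))
r, K = 1.0, 1.0

def rhs(t, N):
    return r * N * (1.0 - alpha @ N / K)

N0 = 0.01 * (1.0 + 0.1 * rng.random(S))        # nearly uniform initial abundances
sol = solve_ivp(rhs, (0.0, 5000.0), N0, t_eval=[200.0, 1000.0, 5000.0], rtol=1e-8)

# At transient times the abundances tend to bunch into clumps of similar species,
# while near equilibrium far fewer species remain; how sharp the clumps are depends
# on the kernel, the boundaries of the niche axis, and the time window examined.
for k, t in enumerate(sol.t):
    print(f"t = {t:6.0f}: species with abundance > 0.05: {int(np.sum(sol.y[:, k] > 0.05))}")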

Friday, August 22, 2008

Batterman on "The Explanatory Role of Mathematics in Empirical Science"

Batterman has posted a draft tackling the problem of how mathematical explanations can provide insight into physical situations. Building on his earlier work, he emphasizes cases of asymptotic explanation where a mathematical equation is transformed by taking limits of one or more quantities, e.g. to 0 or to infinity. A case that has received much discussion (see the comment by Callender in SHPMP) is the use of the “thermodynamic limit” of infinitely many particles in accounting for phase transitions. In this paper Batterman argues that “mapping” accounts of how mathematics is applied, presented by me as well as (in a different way) Bueno & Colyvan, are unable to account for the explanatory contributions that mathematics makes in this sort of case.

I would like to draw attention to two claims. First, “most idealizations in applied mathematics can and should be understood as the result of taking mathematical limits” (p. 9). Second, the explanatory power of these idealizations is not amenable to treatment by mapping accounts because the limits involve singularities: “Nontraditional idealizations [i.e. those ignored by traditional accounts] cannot provide such a promissory background because the limits involved are singular” (p. 20). Batterman has made a good start in this paper arguing for the first claim. The argument starts from the idea that we want to explain regular and recurring phenomena. But if this is our goal, then we need to represent these phenomena in terms of what their various instantiations have in common. And it is a short step from this to the conclusion that what we are doing is representing the phenomena so that they are stable under a wide variety of perturbations of irrelevant detail. We can understand the technique of taking mathematical limits, then, as a fancy way of arriving at a representation of what we are interested in.

Still, I have yet to see any account of why we should expect the limits to involve singularities. Of course, Batterman’s examples do involve singularities, but why think that this is the normal situation? As Batterman himself explains, “A singular limit is one in which the behavior as one approaches the limit is qualitatively different from the behavior one would have at the limit”. For example, with the parameter “e”, the equation ex^2 – 2x – 2 = 0 has two roots for e ≠ 0, and one root for e = 0. So, the limit as e goes to 0 is singular. But the equation x^2 – 2ex – 2 = 0 has a regular limit as e goes to 0, as the number of roots remains the same. So, the question remains: why would we expect the equations that appear in our explanations to result from singular, and not regular, limits?
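The contrast is easy to check numerically; here is a quick sketch of my own (not from Batterman's paper) showing the roots of the two equations as e shrinks:

import numpy as np

# e*x^2 - 2x - 2 = 0: the degree drops at e = 0, so the limit is singular.
for e in (0.1, 0.01, 0.001):
    print("e =", e, "roots:", np.roots([e, -2.0, -2.0]))
print("e = 0  root:", np.roots([-2.0, -2.0]))   # one root survives; the other has run off to infinity

# x^2 - 2e*x - 2 = 0: two roots for every e, approaching +/- sqrt(2); a regular limit.
for e in (0.1, 0.01, 0.001, 0.0):
    print("e =", e, "roots:", np.roots([1.0, -2.0 * e, -2.0]))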

Batterman makes a start on an answer to this as well, but as he (I think) recognizes, it remains incomplete. His idea seems to be that singular limits lead to changes in the qualitative behavior of the system and that in many/most cases our explanation is geared at this qualitative change. Still, just because singular limits are sufficient for qualitative change it does not follow that all or even most explanations of qualitative change will involve singular limits. Nevertheless, here is an important perspective on stability analysis that I hope he will continue to work out.

Saturday, August 16, 2008

Carroll & Albert on Many Worlds

Again following Leiter by a few days, there is an excellent 67-minute discussion of quantum mechanics between physicist Sean Carroll and philosopher of physics David Albert at bloggingheads. The first 41 minutes are an ideal introduction for people who have not been exposed to the issues before, but beginning here there is a more advanced investigation of the prospects for accounting for probabilities in many worlds.

Two moments to watch for: (i) Albert's remark that the only reason to write books is to regret writing them and (ii) Carroll's brief expressions of bemusement as Albert sharpens philosophical distinctions like reasons vs. causes.

Friday, August 15, 2008

Lyon & Colyvan on Phase Spaces

In their recent article “The Explanatory Power of Phase Spaces” Aidan Lyon and Mark Colyvan develop one of Malament’s early criticisms of Field’s program to provide nominalistic versions of our best scientific theories. Malament had pointed out that it was hard to see how Field’s appeal to space-time regions would help to nominalize applications of mathematics involving phase spaces. As only one point in a given phase space could be identified with the actual state of the system, some sort of modal element enters into phase space representations such as Hamiltonian dynamics where we consider non-actual paths. Lyon and Colyvan extend this point further by showing how the phase space representation allows explanations that are otherwise unavailable. They focus on the twin claims that
All galactic systems that can be modeled by the Henon-Heiles system with low energies tend to exhibit regular and predictable motion;
All galactic systems that can be modeled by the Henon-Heiles system with high energies tend to exhibit chaotic and unpredictable motion.
The mathematical explanation of these claims involves an analysis of the structure of the phase spaces of a Henon-Heiles system via Poincare maps. As the energy of such a system is increased, the structure changes and the system can be seen to become more chaotic.
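For the curious, here is a rough sketch of my own (not from Lyon and Colyvan's paper) of the kind of Poincaré-map computation at issue: trajectories of the Hénon-Heiles Hamiltonian are integrated and their crossings of the x = 0 plane recorded, once at a low energy and once near the escape energy 1/6.

import numpy as np
from scipy.integrate import solve_ivp

# Henon-Heiles: H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.
def rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x * x + y * y]

def crossing(t, s):
    return s[0]                     # section at x = 0, taken with px > 0
crossing.direction = 1.0

def poincare_points(E, y0_values, t_max=400.0):
    pts = []
    for y0 in y0_values:
        px2 = 2.0 * E - y0 ** 2 + (2.0 / 3.0) * y0 ** 3    # energy condition at x = 0, py = 0
        if px2 <= 0.0:
            continue
        sol = solve_ivp(rhs, (0.0, t_max), [0.0, y0, np.sqrt(px2), 0.0],
                        events=crossing, dense_output=True, rtol=1e-9, atol=1e-9)
        for te in sol.t_events[0]:
            _, y, _, py = sol.sol(te)
            pts.append((y, py))
    return np.array(pts)

low = poincare_points(1.0 / 12.0, np.linspace(-0.2, 0.3, 6))
high = poincare_points(1.0 / 6.0 - 0.001, np.linspace(-0.3, 0.5, 6))
print(len(low), "section points at E = 1/12;", len(high), "near E = 1/6")
# Scatter-plotting 'low' gives smooth closed curves (regular motion), while 'high'
# is dominated by an irregular scatter of points (the chaotic sea).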

For me, the central philosophical innovation of the paper is the focus on explanatory power, and the claim that even if a nominalistic theory can represent the phenomena in question, the nominalistic theory lacks the explanatory power of the mathematical theory. This is an intriguing claim which seems to me to be largely correct. Still, one would want to know what the source of the explanatory power really is. Lyon and Colyvan focus on the modal aspects of the representation, and claim that this is what would be missing from a nominalistic theory. But it seems that a Field-style theory would have similar problems handling cases of stability analysis where phase spaces are absent. For example, I have used the example of the Konigsberg bridges where the topology of the bridges renders certain sorts of paths impossible. There is of course a modal element in talking of impossible paths, but the non-actual paths are not part of the representation in the way that they appear in phase spaces. What the bridges have in common with this case is that a mathematical concept groups together an otherwise disparate collection of physical phenomena. While all these phenomena may be represented nominalistically, there is something missing from this highly disjunctive representation. I am not sure if what is lost is best characterized as explanatory power, but something is surely worse.

Three different elements come together, then, in Lyon and Colyvan’s case, and it is not clear which contribute to explanatory power: (i) non-actual trajectories in a phase space, (ii) a mathematical concept that groups together a variety of physical systems (“the Henon-Heiles system”) and (iii) stability analysis. Maybe they all make a contribution, but more examples are needed to see this.

Thursday, August 14, 2008

Sarkar, Fuller and Negative Reviews

NDPR recently published a review of Fuller's recent book by Sarkar. As noted by Leiter, the review concludes with
These excursions into fancy allow me to end on a positive note: the lack of depth or insight in this book is more than compensated by the entertainment it provides, at least to a philosopher or historian of science. No one should begrudge us our simple pleasures. I'm happy to have read this book, and even more so not to have paid for it.
While entertaining, I wonder whether such negative attacks are really worthwhile. I myself have not been immune to the temptation to put down a book that I thought should never have been published, as with my remarks (also in NDPR) that
Its main virtue may be to stand as a cautionary example of how not to write a history of analytic philosophy.
But what do such reviews achieve? Perhaps the thought is that they send a message to the philosophical community that this book is not worth reading or even buying. Reviews, then, act as a kind of gatekeeper that warn off naive readers who might otherwise make the mistake of thinking the book is somehow onto something. But an equally effective means to achieve this end might just be to not review the book in the first place. NDPR reviews a lot of books, but even here there is no need to be comprehensive and the mere fact that a book is slated for review already confers on it some special status.

The problem, of course, is that most of us (including myself) agree to review a book before we read it and decide if it is worth reviewing. So, I propose that editors allow potential book reviewers an out: they can review a book in a given time-frame, or else decide not to submit a review by some deadline. For this to actually stop bad books from being reviewed, the editor would have to agree to abide by the "no review" verdict.

Sunday, August 10, 2008

Mathematics, Proper Function and Concepts

Continuing the previous post, I think what bothers me about the proper function approach is the picture of cognitive faculties as innate. This forces the naturalist to account for their origins in evolutionary terms, and this seems hard to do for the faculties that would be responsible for justified mathematical beliefs. A different approach, which should be compatible with the spirit of the proper function approach, would be to posit only a few minimally useful innate faculties along with the ability to acquire concepts. If we think of concepts as Peacocke and others do, then acquiring a mathematical concept requires tacitly recognizing certain principles involving that concept (an "implicit conception"). Then, if a thinker generated a mathematical belief involving that concept and followed the relevant principles, we could insist that the belief was justified.

This would lead to something like:
Jpfc: S’s belief B is justified iff (i) S does not take B to be defeated, (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’, and (iii) the production of B accords with the implicit conceptions of the concepts making B up.
In certain cases, perhaps involving simple logical inferences, the innate cognitive faculties would themselves encode the relevant implicit conceptions, and so clause (iii) would be redundant. But in more advanced situations, where we think about electrons or groups, clause (iii) would come into play. As far as I can tell, this proposal would allow for justified beliefs in demon worlds. For example, if an agent was in a world where groups had been destroyed (if we can make sense of that metaphysical possibility), her group beliefs could still be justified. In fact, the main objection that I foresee to this proposal is that it makes justification too easy, but presumably that is also an objection that the proper function proposal faces for analogous cases.

Wednesday, July 30, 2008

Mathematics and Proper Function

I just finished reading fellow Purdue philosopher Michael Bergmann’s Justification Without Awareness, and would certainly want to echo Fumerton’s comment that “It is one of the best books in epistemology that I have read over the past couple of decades and it is a must read for anyone seriously interested in the fundamental metaepistemological debates that dominate contemporary epistemology.”

The central positive claim of the book is that the justification of agent’s beliefs should be characterized in terms of proper function rather than reliability:
Jpf: S’s belief B is justified iff (i) S does not take B to be defeated and (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’ (133).
The main advantage of this proper function approach over a simple focus on reliability is that a person in a demon world can still have justified perceptual beliefs. This is because their faculties were ‘designed’ for a normal world and they can still function properly in the demon world where they are not reliable.

Still, it is not clear how the proper function approach marks an improvement when it comes to the justification of mathematical beliefs. This is not Bergmann’s focus, but I take it that a problem here would call into question the account. To see the problem, consider a naturalized approach to proper function that aims to account for the design of the cognitive faculties of humans in terms of evolutionary processes. I can see how this might work for some simple mathematical beliefs, e.g. addition and subtraction of small numbers. But when it comes to the mathematics that we actually take ourselves to know, it is hard to see how a faculty could have evolved whose proper function would be responsible for these mathematical beliefs. The justification of our beliefs in axioms of advanced mathematics does not seem to work the same way as the justification of the mathematical beliefs that might confer some evolutionary advantage. If that’s right, then the proper function account might posit a new faculty for advanced mathematics. But it’s hard to see how such a faculty could have evolved. Another approach would be to side with Quine and insist that our justified mathematical beliefs are justified in virtue of their contribution to our justified scientific theories. The standard objection to this approach is that it does not accord with how mathematicians actually justify their beliefs. More to the point, it is hard to see how the cognitive faculties necessary for the evaluation of these whole theories could have evolved either.

A theist sympathetic to the proper function approach might take the failure of naturalistic proper functions as further support for their theism. But if that’s the case, then the claim that proper function approaches are consistent with some versions of naturalism, defended in section 5.3.1, needs to be further qualified.

Monday, July 28, 2008

Science and the A Priori

With some trepidation I have posted a draft of my paper "A Priori Contributions to Scientific Knowledge". The basic claim is that two kinds of a priori entitlement are needed to ground scientific knowledge. I find one kind, called "formal", in the conditions on concept possession, and so here I largely follow Peacocke. For the other kind, called "material", I draw on Friedman's work on the relative a priori.

Even those not interested in the a priori might gain something from the brief case study of the Clowe et al. paper "A Direct Empirical Proof of the Existence of Dark Matter". What is intriguing to me about this case is that the "proof" works for a wide variety of "constitutive frameworks" or "scientific paradigms" in addition to the general theory of relativity. I would suggest that this undermines the claim that such frameworks are responsible for the meaning of the empirical claims, such as "Dark matter exists", but I would still grant them a role in the confirmation of the claims.

Update (July 10, 2009): I have removed this paper for substantial revisions.

Saturday, July 26, 2008

"Reactionary Nostalgia"

A colleague recently drew my attention to this 2006 essay by Levitt on Steve Fuller. Levitt assails Fuller for his sympathies with Intelligent Design (ID) and concludes by trying to link social constructivism with the reactionary politics of the advocates of ID:
I want to explore the possibility that their deepest guiding impulses don't derive from an intellectual conversion to social constructivist theory, but rather from a profound and rather frantic discontent with the world-view science forces them to confront. Most of the visitors to this site have accepted that view to a great degree, regarding the knowledge of the natural world that science affords and the consistency of its knowable laws as adequate consolation for the eclipse of a vision of the universe as governed by a divine purpose, moral equality, and ultimate justice ... ... I think that the persistent popularity of the notion that science is a historically contingent social construct, a narrative not necessarily superior to other accounts of the world, a kind of cognitive imperialism devised by the western ruling caste to humble and demoralize subaltern cultures, stems not from the philosophical plausibility of social constructivism as such, but rather from the deep discontent with the death of teleology to which I have alluded.
It would be interesting to try to trace out this line of thinking more generally, although hopefully with a less polemical aim. For it seems that much of the resistance to the rise of analytic philosophy in the 1960s also stemmed from the desire to retain a central role for philosophy in spiritual and political arenas. If such a pattern could be uncovered through more sustained historical research, we might finally see how misguided some alternative histories of analytic philosophy, like McCumber's, really are.

Wednesday, July 23, 2008

Multiscale Modeling

Most discussions of modeling in the philosophy of science consider the relationship between a single model, picked out by a set of equations, and a physical system. While this is appropriate for many purposes, there are also modeling contexts in which the challenge is to relate several different kinds of models to a system. One family of such cases can be grouped under the heading of 'multiscale modeling'. Multiscale modeling involves considering two or more models that represent a system at different scales. An intuitive example is a model that represents basic particle-to-particle interactions and a continuum model involving larger scale variables for things like pressure and temperature.

In my own work on multiscale modeling I had always assumed that the larger-scale, macro models would be more accessible, and that the challenge lay in seeing how successful macro models relate to underlying micro models. From this perspective, Batterman's work shows how certain macro models of a system can be vindicated without the need of a micro model for that system.

It seems, though, that applied mathematicians have also developed techniques for working exclusively with a micro model due to the intractable nature of some macro modeling tasks. The recently posted article by E and Vanden-Eijnden, "Some Critical Issues for the 'Equation-free' Approach to Multiscale Modeling", challenges one such technique. As developed by Kevrekidis, Gear, Hummer and others, the equation-free approach aims to model the macro evolution of a system using only the micro model and its equations:
We assume that we do not know how to write simple model equations at the right macroscopic scale for their collective, coarse grained behavior. We will argue that, in many cases, the derivation of macroscopic equations can be circumvented: by using short bursts of appropriately initialized microscopic simulation, one can effectively solve the macroscopic equations without ever writing them down, and build a direct bridge between microscopic simulation and traditional continuum numerical analysis. It is, thus, possible to enable microscopic simulators to directly perform macroscopic systems level tasks (1347).
At an intuitive level, the techniques involve using a sample of microscopic calculations to estimate the development of the system at the macroscopic level. E and Vanden-Eijnden question both the novelty of this approach and its application to simple sorts of problems. One challenge is that the restriction to the micro level may not be any more tractable than a brute force numerical solution to the original macro level problem.
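To get a feel for the idea, here is a toy sketch of my own under strong simplifying assumptions (it is not the Kevrekidis et al. method as actually implemented, nor one of E and Vanden-Eijnden's examples): the 'micro' model is an ensemble of noisy particles relaxing toward the origin, the coarse variable is the ensemble mean, and short micro bursts are used to estimate a coarse time derivative that is then extrapolated over a larger projective step.

import numpy as np

rng = np.random.default_rng(1)
n_particles = 5000
dt_micro, burst_steps = 0.01, 10               # short microscopic burst
dt_projective = 0.2                             # larger coarse extrapolation step

def micro_burst(mean):
    # "Lift": build a particle ensemble consistent with the coarse variable.
    x = mean + 0.1 * rng.standard_normal(n_particles)
    means = [x.mean()]
    for _ in range(burst_steps):
        # Micro dynamics: linear relaxation plus noise; the mean obeys dm/dt = -m.
        x += -x * dt_micro + 0.05 * np.sqrt(dt_micro) * rng.standard_normal(n_particles)
        means.append(x.mean())
    return means

m, t = 1.0, 0.0
while t < 3.0:
    means = micro_burst(m)
    # "Restrict": estimate the coarse time derivative from the burst, then project
    # the coarse variable forward without running the micro model over that interval.
    dmdt = (means[-1] - means[0]) / (burst_steps * dt_micro)
    m = means[-1] + dmdt * dt_projective
    t += burst_steps * dt_micro + dt_projective
    print(f"t = {t:4.2f}   coarse mean = {m:7.4f}   exact exp(-t) = {np.exp(-t):7.4f}")

Even this toy makes the worry visible: each projective step still requires a full ensemble of microscopic simulations, so the savings depend entirely on how large the projective step can be made relative to the burst.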

Sunday, July 20, 2008

PSA 2008 Schedule

I just noticed the draft schedule for the 2008 Philosophy of Science Association meeting. A highlight for me is, unsurprisingly, the symposium that I am in on "Applied Mathematics and Philosophy of Science". The session will have papers by Batterman, Psillos, Mark Wilson and myself, centered on the theme of the importance of thinking about the contributions that mathematics itself makes to the various aspects of science that philosophers focus on.

The program looks very broad and continues with many papers on newer topics. These include (i) the debates about representation and (ii) social/political roles for science and philosophy of science. Surprisingly, there is also (iii) what could broadly be called the philosophy of meteorology. Do two symposia in this area, "Evidence, Uncertainty, and Risk: Challenges of Climate Science" and "Analyzing Climate Science", make a new trend?

Saturday, July 19, 2008

Aya Sofia Dome



One of the highlights of my recent trip to Turkey was the chance to see the spectacular Aya Sofia in Istanbul. My tour book explains that
The present structure [from 537 AD], whose dome has inspired religious architecture ever since, was designed by two Greek mathematicians -- Anthenius of Tralles and his assistant Isidorus of Miletus, who were able to apply to architecture the enormous strides that had recently been made in geometry. Unfortunately, part of the dome collapsed during an earthquake a mere 21 years later, revealing a fault in the original plans -- the dome was too shallow and the buttressing insufficient.
Here we have a potential instance of a pattern common from the use of mathematics in engineering and design: a mathematical theory fails to be adjusted to the materials or circumstances of application. Exactly what went wrong is unclear, but it is important to keep in mind how rare successful applications of mathematical theories really are.

Monday, July 7, 2008

New Book: The Philosophy of Mathematical Practice

The Philosophy of Mathematical Practice, edited by Mancosu, is now out, and I am hopeful that it will push the much-discussed turn to practice in new and exciting directions. In his introduction Mancosu helpfully situates the volume by saying "What is distinctive of this volume is that we integrate local studies with general philosophy of mathematics, contra Corfield, and we also keep traditional ontological and epistemological topics in play, contra Maddy". Here we have an approach to practice that finds philosophical issues arising from within mathematical practice, as was ably demonstrated in Mancosu's earlier book on the 17th century.

For me, the highlight of the volume is the excellent essay by Urquhart on how developments in physics have been "assimilated" into mathematics. This assimilation is not limited to putting the mathematics on a more rigorous foundation (or foundations, as several rigorizations are often possible), but also has led to new mathematics of intrinsic mathematical importance. As Urquhart puts it,
The common feature of the examples of the Dirac delta function, infinitesimals, and the umbral calculus is that the explications given for the anomalous objects and reasoning patterns involving them is what may be described as pushing down higher order objects. In other words, we take higher order objects, existing higher up in the type hierarchy, and promote them to new objects on the bottom level. This general pattern describes an enormous number of constructions.
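To see the pattern in a familiar case (my illustration, not one drawn from Urquhart's wording): the Dirac delta is naively treated as a function that vanishes everywhere except at zero yet integrates to one, and the standard rigorization takes it to be a higher-order object, a functional on test functions, which is then promoted to a new ground-level object, a distribution:

\[ \langle \delta, \varphi \rangle = \int_{-\infty}^{\infty} \delta(x)\,\varphi(x)\,dx = \varphi(0), \qquad \varphi \in C_c^{\infty}(\mathbb{R}). \]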
The essay finishes with an intriguing case of "spin glasses" where the so-far unrigorized "replica method" has proven extremely successful.

Friday, July 4, 2008

Applied Group Theory

Soul Physics draws attention to a recent, apparently successful application of group theory. The goal is to model the spine and spinal injuries. While the case is introduced as one that is "unreasonably effective", I am not sure how surprising this sort of application really is. Spatial rotations, after all, are what groups are good at modelling. Maybe a review of the details of the case will reveal what is unexpected.
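For what it is worth, here is a toy sketch (mine, not anything from the study Soul Physics discusses) of the underlying point: three-dimensional rotations compose and invert within the group SO(3), which is why group theory is a natural tool for tracking the relative orientations of vertebrae.

import numpy as np

def rotation_z(theta):
    # Rotation about the z-axis by angle theta (in radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R1, R2 = rotation_z(0.3), rotation_z(0.5)
R = R2 @ R1                                # closure: composing two rotations gives a rotation
assert np.allclose(R, rotation_z(0.8))     # rotations about a fixed axis add their angles
assert np.allclose(R @ R.T, np.eye(3))     # orthogonality: the inverse is the transpose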

Wednesday, July 2, 2008

Intuition of Quasi-Concrete Objects

In his intriguing discussion of our intuition of quasi-concrete objects Parsons focuses on a series of stroke-inscriptions (familiar from Hilbert) and their respective types. For Parsons, a perception or imagining of the inscription is not sufficient for an intuition of the type:
I do not want to say, however, that seeing a stroke-inscription necessarily counts as intuition of the type it instantiates. One has to approach it with the concept of the type; first of all to have the capacity to recognize other situations either as presenting the same type, or a different one, or as not presenting a string of this language at all. But for intuiting a type something more than mere capacity is involved, which, at least in the case of a real inscription, could be described as seeing something as the type (165).
Again, later Parsons says that "intuition of an abstract object requires a certain conceptualization brought to the situation by the subject" (179).

Unfortunately, Parsons says little about what this concept is or what role it plays in the intuition of the type. The risk, to my mind, is that a clarification of the role for this concept might make the perception or imagination of the token irrelevant to the intuition. If, for example, we think of concepts along Peacocke's lines, then possessing the concept of a type is sufficient to think about the type. Some might think that acquiring the concept requires the perception or imagination of the token, but Parsons says nothing to suggest this. I would like to think this has something to do with his view that the token is an intrinsic representation of the type, but the connection is far from clear to me.

(Logic Matters has an extended discussion of the Parsons book as well.)

Monday, June 30, 2008

Quasi-Concrete Objects

One of the central claims of Charles Parsons' remarkable new book Mathematical Thought and Its Objects is that abstract objects come in two flavors: pure and quasi-concrete. Independently of the epistemological consequences of this division, we can ask how cogent and well-motivated it really is. Abstract objects are introduced using the standard negative tests: "an object is abstract if it is not located in space and time and does not stand in causal relations" (1). Quasi-concrete objects are abstract objects that have an additional feature:
What makes an object quasi-concrete is that it is of a kind which goes with an intrinsic, concrete "representation," such that different objects of the kind in question are distinguishable by having different representations (34).
A central example of this sort of object is expression-types. Each token represents a type and we individuate a given type by the tokens that represent it. It is clear how the token is concrete and also fairly clear how its representation of the type is intrinsic. One could say that the token stands in the relation it does to the type solely in virtue of its intrinsic features.

Still, the situation is less clear with a second central case:
Although sets are in general not quasi-concrete, it does seem that sets of concrete objects should count as such; here the relation of representation would be just membership (35).
The first objection that Parsons notes to this proposal is that the representation relationship is too different because "one element can hardly represent the set as a whole". But it seems to me that a more serious objection focuses on the intrinsic nature of the membership relation. For concrete objects do not stand in any intrinsic relationship to sets. That is, a concrete object is not the member of a set solely in virtue of its intrinsic features. If we drop this intrinsic-ness test, then the motivation for carving out the quasi-concrete objects escapes me.

It is true that an impure set stands in an essential relation to its members, and so we might say that it also stands in an intrinsic relation to its members. But this is to reverse the direction of representation that Parsons originally invokes.

Friday, June 27, 2008

Kinds of A Priori Justification

In A Priori Justification, Casullo presents a minimal conception of the a priori as simply nonexperiential justification. This, in turn, is explained negatively as justification that does not arise due to the operation of the five senses. This clearly leaves open the possibility that there are several different kinds of a priori justification, but nearly all advocates for the a priori that I can find seem to assume that there must be a single, unified source.

A notable exception is Pap's 1944 article (Phil. Rev. 53: 465-484), although he weakens the plausibility of his three-fold distinction by concluding that the different kinds of a priori can easily intermingle.

Why not take a harder line and insist that there are different kinds of nonexperiential justification? One thought would be that some a priori justification is absolute because it is tied to conditions on concept possession, as with Peacocke, while some a priori justification is relative because it is tied to constitutive frameworks, as with Michael Friedman. Maybe this is the best way for the defender of the a priori to take on the radical empiricist.

Monday, June 23, 2008

Do We Know the Cause of Aerodynamic Lift?

We are all familiar with the upward force experienced by an airplane as it travels down the runway and through the air. But it is not entirely clear if we know the cause of this lift. Let's distinguish three tests for knowing the cause of some phenomenon:

(1) We can bring the phenomenon about with regularity and in a wide variety of circumstances.
(2) We have a scientific model which allows us to predict that the phenomenon will occur in these circumstances.
(3) We have a scientific model which includes accurate representations of the fundamental physical processes responsible for the phenomenon in these circumstances and this model allows us to predict that the phenomenon will occur in these circumstances.

(1) and (2) seem to me to be inadequate unless we adopt a non-standard account of causation. If I understand McCabe correctly, then he is arguing that we lack (3). This is because we believe that particle-to-particle interactions are the fundamental physical processes responsible for the generation of lift, but none of the models that we can work with accurately represent these processes.

As McCabe and Flatow explain, the common explanation in terms of a difference in pressure, known as Bernoulli's principle, fails. What is less clear is how the textbook explanation in terms of circulation relates to particle-to-particle interactions, and how tenuous this relation can be while remaining consistent with a claim to know the cause.
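For reference, the textbook circulation explanation is usually summarized by the Kutta-Joukowski theorem (a standard result, offered here only as background): for steady, incompressible, inviscid two-dimensional flow, the lift per unit span on an airfoil is

\[ L' = \rho_\infty V_\infty \Gamma, \qquad \Gamma = \oint_C \mathbf{v} \cdot d\mathbf{s}, \]

where \rho_\infty and V_\infty are the free-stream density and speed and C is any contour enclosing the airfoil. Nothing in this formula mentions particle-to-particle interactions, which is one way of putting the worry about test (3).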

Friday, June 20, 2008

The Philosophy of Applied Mathematics?


Here I try to make a start on what the philosophy of applied mathematics might look like. I present it as a potential development from the turn to mathematical practice, familiar from Maddy and Corfield. Then I turn to the extended example of Prandtl's boundary layer theory before concluding with some potential epistemic, semantic and metaphysical implications for the philosophy of mathematics as a whole.

Suggestions for improvement or pointers to similar work by others are appreciated.


Wednesday, June 18, 2008

Vienna and the Vienna Circle


Here are the slides from a presentation I gave in April to the European Studies group at Purdue. The basic point is that a study of the historical context of a philosopher can be essential to understanding the content of their position. One can concede this role to context without thinking that the correctness of one's philosophical views is determined by the context. Extended debate about the importance of context for the history of philosophy has developed out of Soames' book. See here for Soames' various replies to criticisms.

Monday, June 16, 2008

Nominalistic Content

In "A Role for Mathematics in Phyiscal Theories" I argued that fictionalists have a problem when it comes to specifying the nominalistic content of our best scientific theories. David Liggins recently drew my attention to Gideon Rosen's explanation of nominalistic adequacy in his 2001 article "Nominalism, Naturalism, Epistemic Relativism". Does this approach block my argument?

Here is Rosen's account:
Let's say the concrete content of a world W is the largest wholly concrete part of W: the aggregate of all of the concrete objects that exist in W ... S is nominalistically adequate iff the concrete core of the actual world is an exact intrinsic duplicate of the concrete core of some world at which S is true -- that is, just in case things are in all concrete respects as if S were true (p. 75).
Here is a summary of my challenge to fictionalism: (i) the fictionalist must present something like Field's axioms if he is to explain which parts of the full content get into the nominalistic content. But (ii) giving these axioms would involve taking a stand on features of the concrete world that go beyond our evidence for the mathematical scientific theory. So, (iii) there is no epistemically responsible way for the fictionalist to specify how the nominalistic content differs from the full content.

A fictionalist might agree with the demand in (i), but think that Rosen's approach resolves the issue without appealing to Field-style axioms. I am not sure how this will work, though. If we use the real numbers to represent temperature, how does Rosen's test apply? For example, suppose we consider a law about thermal expansion. If that is part of my theory, what does it mean to say that the law is nominalistically adequate? Let's take two claims that may or may not belong to its nominalistic content: (a) instantiated temperatures are dense, (b) there is no lowest temperature. Both of these can be expressed in a nominalistic language provided we have Field's temperature predicates around. So, I think they are about the concrete world and should be relevant to nominalistic content.

Now I suggest that even if we accept Rosen's test, this is no help in resolving the question of the nominalistic adequacy of the law. I do not know whether (a) and (b) are part of the nominalistic content of the law or how this is determined. This is the sense in which the commitments are indeterminate for the fictionalist. (I am not saying I explained this very well in the paper, but this is at least how I am thinking about it now.)

Suppose a fictionalist responded that whatever indeterminacy there is for the nominalistic content also arises for the full content. So, there is nothing here to tell against fictionalism and in favor of some kind of realism. My view is that a realist who can accept the mapping account can specify the full content with reference to these mappings. For this law, it would be something like "For any iron bar, if the temperature were to be increased by amount t, then the length of the bar would increase by alpha * t". Here the antecedent and the consequent involve mappings between objects with physical properties and mathematical objects. This clearly does not require (a) or (b). So, because we can appeal to mappings or relationships between physical properties and mathematical objects, we can resolve some apparent indeterminacies in the full content.
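In its standard textbook form (my gloss on the counterfactual above, not a quotation from the article), the expansion law relates the mapped quantities by

\[ \Delta L = \alpha \, L_0 \, \Delta T, \]

where L_0 is the bar's initial length, \Delta T the increase in temperature, and \alpha the material's coefficient of linear expansion. The law constrains only these mapped values; nothing in it says whether instantiated temperatures are dense or whether there is a lowest temperature.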

Why can't the fictionalist say the same thing? Maybe he can, but it seems that no fictionalists have explained how this would work beyond some toy examples involving counting. So, maybe the best way to see my discussion is as a challenge to the fictionalist to explain how he can match the realist in giving determinate contents to our scientific statements and theories. On my story, the commitment to realism comes in explaining the full content. The fictionalist either needs to bring in this explanation or else directly specify the nominalistic content by other means. I have not shown that both of these strategies are hopeless, but I think the burden is on the fictionalist to work it out.

Another reply is that the fictionalist need not satisfy my demand to explain how the full content relates to the nominalistic content or to clarify the nominalistic content directly. This is the reply I try to deal with in the article by saying that the fictionalist must explain what he is committing himself to in accepting a given statement or theory. Otherwise, he is not facing up to Quine's challenge on ontological commitment.

Sunday, June 15, 2008

Welcome!

This is a new philosophy blog, with a focus on philosophy of mathematics, philosophy of science and the history of analytic philosophy. I will look to post links of interest to people working in these areas along with updates on my own work.