Sunday, August 31, 2008

Book Project: Mathematics and Scientific Representation

This year I will be trying to come up with a draft of a book that I have been planning for some time, but that I have been quite unsure how to organize. The general issue is the prevalence of mathematics in science and whether there is a philosophical problem lurking here that can be productively discussed. My current angle of attack is to focus on the contribution of this or that part of mathematics to a particular scientific representation. ("Representation" is meant to include both theories and models.) So, for a given physical situation, context, and mathematical representation of that situation, we can ask: (i) what does the mathematics contribute to the representation, (ii) how does it make this contribution, and (iii) what must be in place for this contribution to occur?

To avoid devolving into a list of examples, I am also trying to come up with different sorts of representations and different ways that mathematics might contribute. At the moment the dimensions are: (i) whether the mathematics is intrinsic or extrinsic to the content of the representation, (ii) whether the representation is concrete and causal or abstract and acausal, (iii) whether the representation is concrete and fixed (i.e. has a fixed interpretation) or abstract and varying, (iv) the scale of the representation, and (v) the global (as in a constitutive framework) or local character of the representation. In future posts I will try to clarify these dimensions and offer examples of the different ways in which mathematics contributes along them.

The main point of the book, though, is to argue that the contribution that mathematics makes in all these different kinds of cases can generally be classified as epistemic. That is, mathematics helps us to formulate representations that we can confirm given the data we can actually collect. So, there will be many issues related to confirmation and epistemology that I will try to explore here as well.

Pointers to similar projects or projects pursuing this issue in a different way are welcome!

Wednesday, August 27, 2008

Explaining Clumps via Transient Simulations

Following up the previous post on Batterman and mathematical explanation, here is a case in which a mathematical explanation of a physical phenomenon has been offered, and the explanation depends on not taking asymptotic limits. The phenomenon in question is the "clumping" of species around a given ecological niche. This is widely observed, but it conflicts with equilibrium analyses of the relevant ecological models, which instead predict that a single species will come to occupy each niche.

As Nee & Colegrave reported in 2006 (Nature 441(7092): 417-418), Scheffer & van Nes (DOI: 10.1073/pnas.0508024103) overcame this unsatisfactory state of affairs by running simulations that examine the long-term, but still transient, behavior of the same ecological models. The simulations successfully reproduced the clumping observed in ecological systems:
Analytical work looks at the long-term equilibria of models, whereas a simulation study allows the system to be observed as it moves towards these equilibria ... The clumps they observe are transient, and each will ultimately be thinned out to a single species. But 'ultimately' can be a very long time indeed: we now know that transient phenomena can be very long-lasting, and hence, important in ecology, and such phenomena can be studied effectively only by simulation (417).
While the distinction between analysis and simulation seems to me a bit exaggerated, the basic point stands: we can sometimes explain natural phenomena using mathematics only by not taking limits. Limits can lead to misrepresentations just as much as any other mathematical technique. More to the point, explanatory power can arise from examining the non-limiting behavior of the system.
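For readers who want to see the effect for themselves, the flavor of such a transient simulation is easy to reproduce. Below is a minimal sketch, not Scheffer & van Nes's actual code: it uses Lotka-Volterra competition along a one-dimensional niche axis with a Gaussian competition kernel, in the spirit of their model, and all parameter values are my own illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 120
mu = np.sort(rng.uniform(0, 1, n))      # niche positions of the n species
sigma = 0.12                            # width of the competition kernel
alpha = np.exp(-(mu[:, None] - mu[None, :])**2 / (2 * sigma**2))
r, K = 1.0, 1.0

def lotka_volterra(t, N):
    # dN_i/dt = r * N_i * (1 - sum_j alpha_ij * N_j / K)
    return r * N * (1 - alpha @ N / K)

sol = solve_ivp(lotka_volterra, (0, 20000), np.full(n, 1e-3),
                t_eval=[200, 2000, 20000], rtol=1e-8, atol=1e-12)

# Plot abundance against niche position at three snapshots in time.
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 6))
for ax, t, N in zip(axes, sol.t, sol.y.T):
    ax.vlines(mu, 0, N)
    ax.set_ylabel(f"t = {t:g}")
axes[-1].set_xlabel("niche position")
plt.show()
```

At intermediate times the surviving species bunch into clumps of near-neighbors on the niche axis; only on far longer timescales does each clump thin toward the single survivor that the equilibrium analysis predicts.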

Friday, August 22, 2008

Batterman on "The Explanatory Role of Mathematics in Empirical Science"

Batterman has posted a draft tackling the problem of how mathematical explanations can provide insight into physical situations. Building on his earlier work, he emphasizes cases of asymptotic explanation where a mathematical equation is transformed by taking limits of one or more quantities, e.g. to 0 or to infinity. A case that has received much discussion (see the comment by Callender in SHPMP) is the use of the “thermodynamic limit” of infinitely many particles in accounting for phase transitions. In this paper Batterman argues that “mapping” accounts of how mathematics is applied, presented by me and, in a different way, by Bueno & Colyvan, are unable to account for the explanatory contributions that mathematics makes in this sort of case.

I would like to draw attention to two claims. First, “most idealizations in applied mathematics can and should be understood as the result of taking mathematical limits” (p. 9). Second, the explanatory power of these idealizations is not amenable to treatment by mapping accounts because the limits involve singularities: “Nontraditional idealizations [i.e. those ignored by traditional accounts] cannot provide such a promissory background because the limits involved are singular” (p. 20). Batterman makes a good start in this paper at arguing for the first claim. The argument starts from the idea that we want to explain regular and recurring phenomena. If this is our goal, then we need to represent these phenomena in terms of what their various instantiations have in common. And it is a short step from this to the conclusion that we are representing the phenomena so that they are stable under a wide variety of perturbations of irrelevant detail. The technique of taking mathematical limits, then, can be understood as a fancy way of arriving at a representation of what we are interested in.

Still, I have yet to see any account of why we should expect the limits to involve singularities. Of course, Batterman’s examples do involve singularities, but why think that this is the normal situation? As Batterman himself explains, “A singular limit is one in which the behavior as one approaches the limit is qualitatively different from the behavior one would have at the limit”. For example, with the parameter e, the equation ex^2 – 2x – 2 = 0 has two roots for e ≠ 0, but only one root for e = 0, so the limit as e goes to 0 is singular. By contrast, the equation x^2 – 2ex – 2 = 0 has a regular limit as e goes to 0, since the number of roots remains the same. So the question remains: why would we expect the equations that appear in our explanations to result from singular, rather than regular, limits?
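The contrast is easy to check numerically. Here is a quick sketch (my own illustration, not from Batterman's paper) that tracks both sets of roots as e shrinks:

```python
import numpy as np

for e in (0.1, 0.01, 0.001):
    singular = np.roots([e, -2, -2])    # e*x^2 - 2x - 2 = 0
    regular = np.roots([1, -2*e, -2])   # x^2 - 2ex - 2 = 0
    print(f"e = {e}: singular {np.sort(singular)}, regular {np.sort(regular)}")

# At e = 0 the first equation degenerates to -2x - 2 = 0, with one root;
# the second still has two roots.
print("e = 0:", np.roots([-2, -2]), "vs.", np.roots([1, 0, -2]))
```

One root of the first equation settles down to -1 while the other blows up like 2/e, so the root count drops discontinuously at the limit; the roots of the second simply drift to ±√2.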

Batterman makes a start on an answer to this as well, but as he (I think) recognizes, it remains incomplete. His idea seems to be that singular limits lead to changes in the qualitative behavior of the system and that in many, perhaps most, cases our explanation is aimed at just this qualitative change. Still, even if singular limits are sufficient for qualitative change, it does not follow that all or even most explanations of qualitative change will involve singular limits. Nevertheless, this is an important perspective on stability analysis, and I hope he will continue to work it out.

Saturday, August 16, 2008

Carroll & Albert on Many Worlds

Again following Leiter by a few days, there is an excellent 67-minute discussion of quantum mechanics between physicist Sean Carroll and philosopher of physics David Albert at bloggingheads. The first 41 minutes are an ideal introduction for people who have not been exposed to the issues before; beginning at that point, there is a more advanced investigation of the prospects for accounting for probabilities in many worlds.

Two moments to watch for: (i) Albert's remark that the only reason to write books is to regret writing them and (ii) Carroll's brief expressions of bemusement as Albert sharpens philosophical distinctions like reasons vs. causes.

Friday, August 15, 2008

Lyon & Colyvan on Phase Spaces

In their recent article “The Explanatory Power of Phase Spaces”, Aidan Lyon and Mark Colyvan develop one of Malament’s early criticisms of Field’s program to provide nominalistic versions of our best scientific theories. Malament had pointed out that it was hard to see how Field’s appeal to space-time regions could help to nominalize applications of mathematics involving phase spaces. Since only one point in a given phase space can be identified with the actual state of the system, some sort of modal element enters into phase-space representations, such as Hamiltonian dynamics, where we consider non-actual paths. Lyon and Colyvan extend this point by showing how the phase-space representation permits explanations that are otherwise unavailable. They focus on the twin claims that
All galactic systems that can be modeled by the Hénon-Heiles system with low energies tend to exhibit regular and predictable motion;
All galactic systems that can be modeled by the Hénon-Heiles system with high energies tend to exhibit chaotic and unpredictable motion.
The mathematical explanation of these claims involves an analysis of the structure of the phase spaces of a Hénon-Heiles system via Poincaré maps. As the energy of such a system is increased, the structure changes and the system can be seen to become more chaotic.
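To get a feel for what the Poincaré maps show, one can generate the sections numerically. The sketch below is my own illustration, not Lyon and Colyvan's analysis; the two energies (E = 1/12 vs. E = 1/6, near the escape energy) and the initial conditions are standard illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def henon_heiles(t, s):
    # H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3
    x, y, px, py = s
    return [px, py, -x - 2*x*y, -y - x**2 + y**2]

def section(t, s):            # Poincare section: crossings of the x = 0 plane
    return s[0]
section.direction = 1         # record only crossings with x increasing

def poincare_points(E, y0, py0=0.0, t_max=1000):
    # Start on the section and fix px from the energy constraint.
    px2 = 2*E - py0**2 - y0**2 + (2/3)*y0**3
    if px2 < 0:
        return np.empty((0, 2))
    s0 = [0.0, y0, np.sqrt(px2), py0]
    sol = solve_ivp(henon_heiles, (0, t_max), s0, events=section,
                    rtol=1e-8, atol=1e-8)
    return sol.y_events[0][:, [1, 3]]   # (y, py) at each crossing

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, E in zip(axes, (1/12, 1/6)):    # low vs. high energy
    for y0 in np.linspace(-0.3, 0.3, 7):
        pts = poincare_points(E, y0)
        ax.plot(pts[:, 0], pts[:, 1], ".", ms=1)
    ax.set(title=f"E = {E:.3f}", xlabel="y", ylabel="py")
plt.show()
```

At low energy the crossings trace out closed curves, the signature of regular, quasi-periodic motion; near the escape energy they scatter into a chaotic sea.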

For me, the central philosophical innovation of the paper is the focus on explanatory power, and the claim that even if a nominalistic theory can represent the phenomena in question, it lacks the explanatory power of the mathematical theory. This is an intriguing claim which seems to me largely correct. Still, one would want to know what the source of the explanatory power really is. Lyon and Colyvan focus on the modal aspects of the representation, and claim that this is what would be missing from a nominalistic theory. But it seems that a Field-style theory would have similar problems handling cases of stability analysis where phase spaces are absent. For example, I have used the example of the Königsberg bridges, where the topology of the bridge system renders certain sorts of paths impossible. There is of course a modal element in talking of impossible paths, but the non-actual paths are not part of the representation in the way that they are in phase spaces. What the bridges case has in common with the phase-space case is that a mathematical concept groups together an otherwise disparate collection of physical phenomena. While each of these phenomena may be represented nominalistically, something is missing from the resulting highly disjunctive representation. I am not sure that what is lost is best characterized as explanatory power, but the nominalistic representation is surely worse off.
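The bridges case can be made vivid in a few lines of code. Here is a minimal sketch of Euler's degree-parity argument for why no walk can cross each bridge exactly once (the labels for the four land masses are my own):

```python
from collections import Counter

# The seven bridges of Königsberg as edges between the four land masses:
# A = the Kneiphof island, B and C = the two river banks, D = the eastern
# land mass between the branches of the river.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler: a connected multigraph has a path crossing every edge exactly once
# iff it has zero or two odd-degree vertices. Königsberg has four.
odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree), "odd-degree vertices:", odd)
print("Eulerian path possible:", len(odd) in (0, 2))
```

The same parity criterion classifies any physical arrangement of bridges whatsoever, which is one way of seeing how the mathematical concept groups together otherwise disparate physical systems.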

Three different elements come together, then, in Lyon and Colyvan’s case, and it is not clear which of them contribute to explanatory power: (i) non-actual trajectories in a phase space, (ii) a mathematical concept that groups together a variety of physical systems (“the Hénon-Heiles system”), and (iii) stability analysis. Maybe they all make a contribution, but more examples are needed to see this.

Thursday, August 14, 2008

Sarkar, Fuller and Negative Reviews

NDPR recently published Sarkar's review of Fuller's recent book. As noted by Leiter, the review concludes with
These excursions into fancy allow me to end on a positive note: the lack of depth or insight in this book is more than compensated by the entertainment it provides, at least to a philosopher or historian of science. No one should begrudge us our simple pleasures. I'm happy to have read this book, and even more so not to have paid for it.
Entertaining as this is, I wonder whether such negative attacks are really worthwhile. I myself have not been immune to the temptation to put down a book that I thought should never have been published, as with my remark (also in NDPR) that
Its main virtue may be to stand as a cautionary example of how not to write a history of analytic philosophy.
But what do such reviews achieve? Perhaps the thought is that they send a message to the philosophical community that the book is not worth reading or even buying. Reviews, then, act as a kind of gatekeeper, warning off naive readers who might otherwise mistakenly think the book is onto something. But an equally effective means to this end might be simply not to review the book in the first place. NDPR reviews a lot of books, but even it need not be comprehensive, and the mere fact that a book is slated for review already confers on it some special status.

The problem, of course, is that most of us (myself included) agree to review a book before we have read it and decided whether it is worth reviewing. So, I propose that editors allow potential book reviewers an out: they can submit a review within a given time-frame, or else decide by some deadline not to submit one. For this actually to stop bad books from being reviewed, the editor would have to agree to abide by the "no review" verdict.

Sunday, August 10, 2008

Mathematics, Proper Function and Concepts

Continuing the previous post, I think what bothers me about the proper function approach is its picture of cognitive faculties as innate. This forces the naturalist to account for their origins in evolutionary terms, and that seems hard to do for the faculties that would be responsible for justified mathematical beliefs. A different approach, which should be compatible with the spirit of the proper function approach, would be to posit only a few minimally useful innate faculties, along with the ability to acquire concepts. If we think of concepts as Peacocke and others do, then acquiring a mathematical concept requires tacitly recognizing certain principles involving that concept (an "implicit conception"). If a thinker generated a mathematical belief involving that concept and followed the relevant principles, we could then insist that the belief was justified.

This would lead to something like:
Jpfc: S’s belief B is justified iff (i) S does not take B to be defeated, (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’, and (iii) the production of B accords with the implicit conceptions of the concepts that make up B.
In certain cases, perhaps involving simple logical inferences, the innate cognitive faculties would themselves encode the relevant implicit conceptions, and so clause (iii) would be redundant. But in more advanced situations, where we think about electrons or groups, clause (iii) would come into play. As far as I can tell, this proposal would allow for justified beliefs in demon worlds. For example, if an agent were in a world where groups had been destroyed (if we can make sense of that metaphysical possibility), her beliefs about groups could still be justified. In fact, the main objection that I foresee to this proposal is that it makes justification too easy, but presumably the proper function proposal faces the same objection in analogous cases.