Wednesday, July 30, 2008

Mathematics and Proper Function

I just finished reading fellow Purdue philosopher Michael Bergmann’s Justification Without Awareness, and would certainly want to echo Fumerton’s comment that “It is one of the best books in epistemology that I have read over the past couple of decades and it is a must read for anyone seriously interested in the fundamental metaepistemological debates that dominate contemporary epistemology.”

The central positive claim of the book is that the justification of an agent's beliefs should be characterized in terms of proper function rather than reliability:
Jpf: S’s belief B is justified iff (i) S does not take B to be defeated and (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’ (133).
The main advantage of this proper function approach over a simple focus on reliability is that a person in a demon world can still have justified perceptual beliefs. This is because their faculties were 'designed' for a normal world, and so they can still function properly in the demon world even though they are not reliable there.

Still, it is not clear how the proper function approach marks an improvement when it comes to the justification of mathematical beliefs. This is not Bergmann's focus, but I take it that a problem here would call the account into question. To see the problem, consider a naturalized approach to proper function that aims to account for the design of human cognitive faculties in terms of evolutionary processes. I can see how this might work for some simple mathematical beliefs, e.g. the addition and subtraction of small numbers. But when it comes to the mathematics that we actually take ourselves to know, it is hard to see how a faculty could have evolved whose proper function would be responsible for these mathematical beliefs. The justification of our beliefs in the axioms of advanced mathematics does not seem to work the same way as the justification of the mathematical beliefs that might confer some evolutionary advantage. If that's right, then the defender of the proper function account might posit a new faculty for advanced mathematics; but it is hard to see how such a faculty could have evolved. Another approach would be to side with Quine and insist that our justified mathematical beliefs are justified in virtue of their contribution to our justified scientific theories. The standard objection to this approach is that it does not accord with how mathematicians actually justify their beliefs. More to the point, it is hard to see how the cognitive faculties necessary for the evaluation of whole theories could have evolved either.

A theist sympathetic to the proper function approach might take the failure of naturalistic accounts of proper function as further support for their theism. But if that's the case, then the claim, defended in section 5.3.1, that proper function approaches are consistent with some versions of naturalism needs to be further qualified.

Monday, July 28, 2008

Science and the A Priori

With some trepidation I have posted a draft of my paper "A Priori Contributions to Scientific Knowledge". The basic claim is that two kinds of a priori entitlement are needed to ground scientific knowledge. I find one kind, called "formal", in the conditions on concept possession, and so here I largely follow Peacocke. For the other kind, called "material", I draw on Friedman's work on the relative a priori.

Even those not interested in the a priori might gain something from the brief case study of the Clowe et al. paper "A Direct Empirical Proof of the Existence of Dark Matter". What is intriguing to me about this case is that the "proof" works for a wide variety of "constitutive frameworks" or "scientific paradigms" in addition to the general theory of relativity. I would suggest that this undermines the claim that such frameworks are responsible for the meaning of empirical claims such as "Dark matter exists", but I would still grant them a role in the confirmation of those claims.

Update (July 10, 2009): I have removed this paper for substantial revisions.

Saturday, July 26, 2008

"Reactionary Nostalgia"

A colleague recently drew my attention to this 2006 essay by Levitt on Steve Fuller. Levitt assails Fuller for his sympathies with Intelligent Design (ID) and concludes by trying to link social constructivism with the reactionary politics of the advocates of ID:
I want to explore the possibility that their deepest guiding impulses don't derive from an intellectual conversion to social constructivist theory, but rather from a profound and rather frantic discontent with the world-view science forces them to confront. Most of the visitors to this site have accepted that view to a great degree, regarding the knowledge of the natural world that science affords and the consistency of its knowable laws as adequate consolation for the eclipse of a vision of the universe as governed by a divine purpose, moral equality, and ultimate justice ... I think that the persistent popularity of the notion that science is a historically contingent social construct, a narrative not necessarily superior to other accounts of the world, a kind of cognitive imperialism devised by the western ruling caste to humble and demoralize subaltern cultures, stems not from the philosophical plausibility of social constructivism as such, but rather from the deep discontent with the death of teleology to which I have alluded.
It would be interesting to try to trace out this line of thinking more generally, although hopefully with a less polemical aim. For it seems that much of the resistance to the rise of analytic philosophy in the 1960s also stemmed from the desire to retain a central role for philosophy in spiritual and political arenas. If such a pattern could be uncovered through more sustained historical research, we might finally see how misguided some alternative histories of analytic philosophy, like McCumber's, really are.

Wednesday, July 23, 2008

Multiscale Modeling

Most discussions of modeling in the philosophy of science consider the relationship between a single model, picked out by a set of equations, and a physical system. While this is appropriate for many purposes, there are also modeling contexts in which the challenge is to relate several different kinds of models to a system. One family of such cases can be grouped under the heading of 'multiscale modeling', which involves considering two or more models that represent a system at different scales. An intuitive example pairs a model that represents basic particle-to-particle interactions with a continuum model involving larger-scale variables for things like pressure and temperature.
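To fix ideas, here is a toy sketch of the two levels of description. It is purely illustrative (the Maxwellian-style initialization, the unit constants, and the particle count are all my assumptions, not drawn from any particular model): the micro description tracks every particle velocity, while the macro variables compress that state into a few continuum-scale numbers.

```python
import numpy as np

# Micro model: N point particles in a box, each with its own velocity vector.
rng = np.random.default_rng(0)
N, volume, mass, k_B = 100_000, 1.0, 1.0, 1.0      # toy units, all set to 1
velocities = rng.normal(scale=1.5, size=(N, 3))    # Maxwellian-like micro state

# Macro variables: the continuum description compresses 3N numbers into a few.
mean_kinetic = 0.5 * mass * (velocities**2).sum(axis=1).mean()
temperature = (2.0 / 3.0) * mean_kinetic / k_B     # equipartition of energy
pressure = N * k_B * temperature / volume          # ideal-gas relation

print(f"T = {temperature:.2f}, P = {pressure:.0f} (from {N} micro degrees of freedom)")
```

The multiscale challenge arises when neither description suffices on its own: the micro model is too expensive to run at macro time and length scales, while the macro equations may be unknown or unreliable in some regimes.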

In my own work on multiscale modeling I had always assumed that the larger-scale, macro models would be more accessible, and that the challenge lay in seeing how successful macro models relate to underlying micro models. From this perspective, Batterman's work shows how certain macro models of a system can be vindicated without the need for a micro model of that system.

It seems, though, that applied mathematicians have also developed techniques for working exclusively with a micro model due to the intractable nature of some macro modeling tasks. The recently posted article by E and Vanden-Eijnden, "Some Critical Issues for the 'Equation-free' Approach to Multiscale Modeling", challenges one such technique. As developed by Kevrekidis, Gear, Hummer and others, the equation-free approach aims to model the macro evolution of a system using only the micro model and its equations:
We assume that we do not know how to write simple model equations at the right macroscopic scale for their collective, coarse grained behavior. We will argue that, in many cases, the derivation of macroscopic equations can be circumvented: by using short bursts of appropriately initialized microscopic simulation, one can effectively solve the macroscopic equations without ever writing them down, and build a direct bridge between microscopic simulation and traditional continuum numerical analysis. It is, thus, possible to enable microscopic simulators to directly perform macroscopic systems level tasks (1347).
At an intuitive level, the techniques involve using a sample of microscopic calculations to estimate the development of the system at the macroscopic level. E and Vanden-Eijnden question both the novelty of this approach and how it fares even on simple sorts of problems. One challenge is that the restriction to the micro level may not be any more tractable than a brute-force numerical solution of the original macro-level problem.
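For a sense of how such a scheme works, here is a minimal sketch in the spirit of coarse projective integration, one of the equation-free techniques. It is my illustration rather than anything from the papers under discussion: the micro model (a noisy linear decay whose ensemble mean obeys dm/dt = -m), the lifting step, and all parameters are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def lift(mean, n_particles=10_000, spread=0.1):
    """Lift: build a micro ensemble consistent with the given macro state (the mean)."""
    return mean + spread * rng.standard_normal(n_particles)

def micro_burst(particles, dt=1e-3, n_steps=20, noise=0.05):
    """Short burst of the micro simulator (Euler-Maruyama for dX = -X dt + noise dW),
    recording the macro observable (the ensemble mean) at every micro step."""
    means = [particles.mean()]
    for _ in range(n_steps):
        particles = (particles - particles * dt
                     + noise * np.sqrt(dt) * rng.standard_normal(particles.size))
        means.append(particles.mean())
    return np.array(means), dt

def projective_step(mean, big_dt=0.2):
    """One coarse step: lift, run a micro burst, estimate the macro time
    derivative from the burst, then extrapolate over a large macro step."""
    obs, dt = micro_burst(lift(mean))
    t = dt * np.arange(obs.size)
    slope = np.polyfit(t, obs, 1)[0]            # estimated d(mean)/dt
    return obs[-1] + (big_dt - t[-1]) * slope   # macro extrapolation

# The macro equation here happens to be dm/dt = -m, so the coarse trajectory
# should track exponential decay without that equation ever being written down.
m, t = 1.0, 0.0
for _ in range(10):
    m, t = projective_step(m), t + 0.2
    print(f"t = {t:.1f}   coarse mean = {m:+.4f}   exact = {np.exp(-t):+.4f}")
```

Even this toy makes the worry visible: every projective step pays for a full micro burst, so whether the scheme beats directly integrating the macro equation (here known in advance, but in general not) depends entirely on the problem.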

Sunday, July 20, 2008

PSA 2008 Schedule

I just noticed the draft schedule for the 2008 Philosophy of Science Association meeting. A highlight for me is, unsurprisingly, the symposium that I am part of, "Applied Mathematics and Philosophy of Science". The session will have papers by Batterman, Psillos, Mark Wilson, and myself, centered on the importance of thinking about the contributions that mathematics itself makes to the various aspects of science that philosophers focus on.

The program looks very broad and includes many papers on newer topics. These include (i) the debates about representation and (ii) social/political roles for science and philosophy of science. Surprisingly, there is also (iii) what could broadly be called the philosophy of meteorology. Do two symposia in this area, "Evidence, Uncertainty, and Risk: Challenges of Climate Science" and "Analyzing Climate Science", make a new trend?

Saturday, July 19, 2008

Aya Sofia Dome

[Photo: the dome of the Aya Sofia, Istanbul]
One of the highlights of my recent trip to Turkey was the chance to see the spectacular Aya Sofia in Istanbul. My tour book explains that
The present structure [from 537 AD], whose dome has inspired religious architecture ever since, was designed by two Greek mathematicians -- Anthemius of Tralles and his assistant Isidorus of Miletus, who were able to apply to architecture the enormous strides that had recently been made in geometry. Unfortunately, part of the dome collapsed during an earthquake a mere 21 years later, revealing a fault in the original plans -- the dome was too shallow and the buttressing insufficient.
Here we have a potential instance of a pattern common in the use of mathematics in engineering and design: a mathematical theory fails to be adjusted to the materials or circumstances of its application. Exactly what went wrong is unclear, but it is important to keep in mind how rare successful applications of mathematical theories really are.

Monday, July 7, 2008

New Book: The Philosophy of Mathematical Practice

The Philosophy of Mathematical Practice, edited by Mancosu, is now out, and I am hopeful that it will push the much-discussed turn to practice in new and exciting directions. In his introduction Mancosu helpfully situates the volume by saying "What is distinctive of this volume is that we integrate local studies with general philosophy of mathematics, contra Corfield, and we also keep traditional ontological and epistemological topics in play, contra Maddy". Here we have an approach to practice that finds philosophical issues arising from within mathematical practice, as was ably demonstrated in Mancosu's earlier book on the 17th century.

For me, the highlight of the volume is the excellent essay by Urquhart on how developments in physics have been "assimilated" into mathematics. This assimilation is not limited to putting the mathematics on a more rigorous foundation (or foundations, as several rigorizations are often possible), but has also led to new mathematics of intrinsic mathematical importance. As Urquhart puts it,
The common feature of the examples of the Dirac delta function, infinitesimals, and the umbral calculus is that the explications given for the anomalous objects and reasoning patterns involving them is what may be described as pushing down higher order objects. In other words, we take higher order objects, existing higher up in the type hierarchy, and promote them to new objects on the bottom level. This general pattern describes an enormous number of constructions.
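To see the pattern in the first of these examples, consider the standard distributional explication of the delta function (my gloss, not Urquhart's own formulation). The delta 'function' cannot be an ordinary function from reals to reals, but it can be construed as a higher-order object, a linear functional on test functions, which is then promoted to an object at the bottom level:

```latex
\[
  \delta[\varphi] = \varphi(0),
  \qquad \text{written suggestively as} \qquad
  \int_{-\infty}^{\infty} \delta(x)\,\varphi(x)\,dx = \varphi(0).
\]
```

The integral notation treats the functional as if it were just another function alongside the ordinary ones, which is precisely the 'pushing down' Urquhart describes.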
The essay finishes with an intriguing case of "spin glasses" where the so-far unrigorized "replica method" has proven extremely successful.

Friday, July 4, 2008

Applied Group Theory

Soul Physics draws attention to a recent, apparently successful, application of group theory. The goal is to model the spine and spinal injuries. While the case is introduced as one that is "unreasonably effective", I am not sure how surprising this sort of application really is. Spatial rotations, after all, are exactly what groups are good at modeling. Maybe a review of the details of the case will reveal what is unexpected.
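For what it's worth, the fit between groups and this domain is easy to display. The sketch below is purely illustrative (the segment angles and axis conventions are my assumptions, with no connection to the model in the paper): chained rotations of spinal segments compose by matrix multiplication, and the group structure of SO(3), its closure, inverses, and non-commutativity, does real descriptive work.

```python
import numpy as np

def rot_x(theta):
    """Rotation about the x-axis: one element of the rotation group SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_z(theta):
    """Rotation about the z-axis: another element of SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s, 0.0],
                     [s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

# Chained segment-to-segment rotations compose by matrix multiplication;
# closure guarantees the net orientation is itself a rotation.
flexion, twist = np.radians(20), np.radians(10)   # hypothetical segment angles
net = rot_z(twist) @ rot_x(flexion)

# Every rotation has an inverse (its transpose), so the group axioms hold:
assert np.allclose(net @ net.T, np.eye(3))

# Composition is non-commutative: flexing then twisting differs from
# twisting then flexing, a physically meaningful asymmetry.
print(np.allclose(rot_x(flexion) @ rot_z(twist), net))   # False
```

None of this is surprising, which is perhaps the point: any system built out of chained rigid rotations is group-theoretic almost by definition.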

Wednesday, July 2, 2008

Intuition of Quasi-Concrete Objects

In his intriguing discussion of our intuition of quasi-concrete objects, Parsons focuses on a series of stroke-inscriptions (familiar from Hilbert) and their respective types. For Parsons, a perception or imagining of the inscription is not sufficient for an intuition of the type:
I do not want to say, however, that seeing a stroke-inscription necessarily counts as intuition of the type it instantiates. One has to approach it with the concept of the type; first of all to have the capacity to recognize other situations either as presenting the same type, or a different one, or as not presenting a string of this language at all. But for intuiting a type something more than mere capacity is involved, which, at least in the case of a real inscription, could be described as seeing something as the type (165).
Again, later Parsons says that "intuition of an abstract object requires a certain conceptualization brought to the situation by the subject" (179).

Unfortunately, Parsons says little about what this concept is or what role it plays in the intuition of the type. The risk, to my mind, is that a clarification of the role for this concept might make the perception or imagination of the token irrelevant to the intuition. If, for example, we think of concepts along Peacocke's lines, then possessing the concept of a type is sufficient to think about the type. Some might think that acquiring the concept requires the perception or imagination of the token, but Parsons says nothing to suggest this. I would like to think this has something to do with his view that the token is an intrinsic representation of the type, but the connection is far from clear to me.

(Logic Matters has an extended discussion of the Parsons book as well.)