
Wednesday, April 24, 2013

Six Papers in Mind About Mathematical Fictionalism

The October 2012 issue of Mind (posted today here) has an extended discussion section where mathematical fictionalists of various stripes respond to Colyvan's earlier article "There is No Easy Road to Nominalism". The discussion concludes with a detailed reply by Colyvan. While I am a fan of neither Colyvan's explanatory indispensability argument nor its fictionalist critics, I look forward to reading this discussion and engaging with it soon!

The contents:

Jody Azzouni Taking the Easy Road Out of Dodge Mind (2012) 121(484): 951-965

Otávio Bueno An Easy Road to Nominalism Mind (2012) 121(484): 967-982

Mary Leng Taking it Easy: A Response to Colyvan Mind (2012) 121(484): 983-995

David Liggins Weaseling and the Content of Science Mind (2012) 121(484): 997-1005

Stephen Yablo Explanation, Extrapolation, and Existence Mind (2012) 121(484): 1007-1029

Mark Colyvan Road Work Ahead: Heavy Machinery on the Easy Road Mind (2012) 121(484): 1031-1046

Thursday, April 11, 2013

E. O. Wilson on Science and Math

Prominent biologist and science writer E. O. Wilson has a provocative Wall Street Journal opinion piece about the link between mathematical ability and scientific achievement. Perhaps the central ambiguity of his argument is illustrated by the two different titles the article seems to have. The browser heading is "Great Scientists Don't Need Math", while the actual title is "Great Scientist Does not Equal Good at Math". While the latter claim is almost trivial, the former claim seems very contentious. Of course, I am biased on this issue, having written a book arguing that mathematics makes several crucial contributions to the formulation and justification of our scientific knowledge. But setting that philosophical discussion aside, it is somewhat disturbing to find such a simplistic view of the way mathematics helps in science being presented by such a distinguished scientist.

Wilson's basic idea is that great scientists don't need to be good at math because they can always call on specialists in the relevant areas of mathematics. On Wilson's picture, the great scientists come up with great ideas, and these ideas are then implemented and tested via mathematical models. But the ideas themselves are completely non-mathematical:

Fortunately, exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory. Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition.
Everyone sometimes daydreams like a scientist. Ramped up and disciplined, fantasies are the fountainhead of all creative thinking. Newton dreamed, Darwin dreamed, you dream. The images evoked are at first vague. They may shift in form and fade in and out. They grow a bit firmer when sketched as diagrams on pads of paper, and they take on life as real examples are sought and found.
Pioneers in science only rarely make discoveries by extracting ideas from pure mathematics. Most of the stereotypical photographs of scientists studying rows of equations on a blackboard are instructors explaining discoveries already made. Real progress comes in the field writing notes, at the office amid a litter of doodled paper, in the hallway struggling to explain something to a friend, or eating lunch alone. Eureka moments require hard work. And focus.
Ideas in science emerge most readily when some part of the world is studied for its own sake. They follow from thorough, well-organized knowledge of all that is known or can be imagined of real entities and processes within that fragment of existence. When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward. If that step proves too technically difficult for the person who made the discovery, a mathematician or statistician can be added as a collaborator.
Now, it is clear that some ideas that drive scientific discoveries are non-mathematical. But I do not see much evidence that most of these ideas are like that or that scientists should trust non-scientists to implement their ideas in mathematical terms. It is precisely at this stage that some of the most important and innovative work is done, and it is not clear to me how collaborations can work if one side, the scientist, doesn't understand what the other side, the mathematician, is doing.

See here for another critique of Wilson.

Thursday, July 14, 2011

The unplanned impact of mathematics (Nature)

Peter Rowlett has assembled, with some other historians of mathematics, seven accessible examples of how theoretical work in mathematics led to unexpected practical applications. His discussion seems to be primarily motivated by the recent emphasis on the "impact" of research, both in Britain and in the US:
There is no way to guarantee in advance what pure mathematics will later find application. We can only let the process of curiosity and abstraction take place, let mathematicians obsessively take results to their logical extremes, leaving relevance far behind, and wait to see which topics turn out to be extremely useful. If not, when the challenges of the future arrive, we won't have the right piece of seemingly pointless mathematics to hand.
For philosophers, the most important example to keep in mind, I think, is the last one, offered by Chris Linton: the role of Fourier series in promoting the later "rigorization" of math:
In the 1870s, Georg Cantor's first steps towards an abstract theory of sets came about through analysing how two functions with the same Fourier series could differ.
Rowlett has a call for more examples on the BSHM website. Hopefully this will convince some funding agencies that immediate impact is not a fair standard!

Sunday, November 14, 2010

Mathematics and Scientific Representation, Claim 1: Many Contributions

In the next few weeks, I hope to go through 12 of the key claims which I try to defend in my book manuscript. At its most general, the topic of the book is how mathematics helps in science. I assume to start that science is quite successful. This success is not limited to its ability to generate consensus amongst its practitioners, but extends to its predictions and contributions to technological innovations. I more or less assume some kind of scientific realism, then, although exactly how realist we should be is part of the discussion of the book.

So, what does mathematics contribute to the success of science? I argue that
1. A promising way to make sense of the way in which mathematics contributes to the success of science is by distinguishing several different contributions.
Many philosophers seem to think that there is one thing which mathematics does. Perhaps the most influential view along these lines goes back (at least) to Wittgenstein's Tractatus:
In life it is never a mathematical proposition which we need, but we use mathematical propositions only in order to infer from propositions which do not belong to mathematics to others which equally do not belong to mathematics. (6.21)
But this seems too narrow. Mathematics makes any number of contributions to the success of science, and there is no straightforward way to reduce them all to a single kind.

The problems with Wittgenstein's approach are obvious. In many cases, we have no clue what the non-mathematical inputs or outputs are supposed to be. We start with mathematical descriptions and we end with equally mathematical descriptions. Either there is something defective in scientific practice, or Wittgenstein's approach is wrong. Beyond this sort of inferential or deductive contribution, there must be other kinds of contributions. But how are we to enumerate these contributions, and is there anything to be said about what they might have in common?

Monday, October 18, 2010

Workshop: The Role of Mathematics in Science

Readers of this blog in the Toronto area may want to check out a workshop this Friday at the University of Toronto, IHPST. It is on the role of mathematics in science and the speakers are me, Margaret Morrison (Toronto), Steven French (Leeds), Alex Koo (Toronto) and Alan Baker (Swarthmore). The program is online here.

Saturday, October 16, 2010

Mandelbrot (1924-2010)

The New York Times obituary gives a useful overview of his career and contributions to applications.

Wednesday, April 28, 2010

Mathematical Explanation in the NYRB

In his recent review of Dawkins' Oxford Book of Modern Science Writing Jeremy Bernstein characterizes one entry as follows:
W.D. Hamilton’s mathematical explanation of the tendency of animals to cluster when attacked by predators.
The article in question is "Geometry for the Selfish Herd", Journal of Theoretical Biology 31 (1971): 295-311. (Online here.) Given the ongoing worries about the existence and nature of mathematical explanations in science, it is worth asking what led Bernstein to characterize this explanation as mathematical.

The article summarizes two models of predation which are used to support the conclusion that the avoidance of predators "is an important factor in the gregarious tendencies of a very wide variety of animals" (p. 298). The first model considers a circular pond where frogs, the prey, are randomly scattered on the edge. The predator, a single snake, comes to the surface of the pond and strikes whichever frog is nearest. Hamilton introduces the notion of a frog's domain of danger: the part of the pond edge such that a snake surfacing nearest to it would strike that frog. Hamilton points out that the frogs can reduce their domains of danger by jumping together. In this diagram the black frog jumps between two other frogs:



So, "selfish avoidance of a predator can lead to aggregation."

In the slightly more realistic two-dimensional case Hamilton generalizes his domains of danger to polygons whose sides result from bisecting the lines which connect the prey:



Hamilton notes that it is not known what the general best strategy is here for a prey organism to minimize its domain of danger, but gives rough estimates to justify the conclusion that moving towards one's nearest neighbor is appropriate. This is motivated in part by the claim that "Since the average number of sides is six and triangles are rare (...), it must be a generally useful rule for a cow to approach its nearest neighbor."

So, we can explain the observed aggregation behavior using the ordinary notion of fitness and an appeal to natural selection. What is the mathematics doing here and why might we have some sort of specifically mathematical explanation? My suggestion is that the mathematical claim that strategy X minimizes (or reliably lowers) the domain of danger is a crucial part of the account. Believing this claim and seeing its relevance to the aggregation behavior is essential to having this explanation. Furthermore, this seems like a very good explanation. What implications this has for our mathematical beliefs remains, of course, a subject for debate.

Monday, April 12, 2010

New Entries in Internet Encyclopedia of Philosophy on the Philosophy of Mathematics

Under the editorial guidance of Roy Cook a number of new entries in philosophy of mathematics have appeared on the Internet Encyclopedia of Philosophy. As I understand it, the aim of this site is to present relatively short summaries which are accessible to a wider audience, esp. undergraduate students, than some other options.

Check out these recent entries:

Bolzano's Philosophy of Mathematical Knowledge (by Sandra Lapointe)

The Applicability of Mathematics (by me -- more shameless self-promotion!)

Mathematical Platonism (by Julian Cole)

Predicative and Impredicative Definitions (by Oystein Linnebo)


A list of all of the philosophy of mathematics entries can be monitored here.

Monday, October 26, 2009

Mathematics and Scientific Representation: Summary and Chapter 1

More than a year ago I posted a fairly vague description of a book project on the ways in which mathematics contributes to the success of science. I have made some progress on bringing together this material and thought it would be useful to post a summary of the chapters of the book along with an introductory chapter where I give an overview of the main conclusions of the book. Hopefully this is useful to other people working on similar projects. Critical suggestions about what is missing or who else is doing similar work are of course welcome!

Update (Feb. 17, 2011): The link to chapter 1 has been replaced with the final version. The link to the summary has been removed.

Tuesday, October 6, 2009

Mathematics, Financial Economics and Failure

In a recent post I noted Krugman's point about economics being seduced by attractive mathematics. Since then there have been many debates out there in the blogosphere about the failures of financial economics, but little discussion of the details of any particular case. I want to start that here with a summary of how the most famous model in financial economics is derived. This is the Black-Scholes model, given as (*) below. It expresses the correct price V for an option as a function of the current price of the underlying stock S and the time t.

My derivation follows Almgren, R. (2002). Financial derivatives and partial differential equations. American Mathematical Monthly, 109: 1-12.

In my next post I aim to discuss the idealizations deployed here and how reasonable they make it to apply (*) in actual trading strategies.

A Derivation of the Black-Scholes Model

A (call) option gives the owner the right to buy some underlying asset like a stock at a fixed price K at some time T. Clearly some of the factors relevant to the fair price of the option now are the difference between the current price of the stock S and K, as well as the length of time between now and time T when the option can be exercised. Suppose, for instance, that a stock is trading at $100 and the option gives its owner the right to buy the stock at $90. Then if the option can be exercised at that moment, the option is worth $10. But if it is six months or a year until the option can be exercised, what is a fair price to pay for the $90 option? It seems like a completely intractable problem that could depend on any number of factors, including features specific to that asset as well as an investor's tolerance for risk. The genius of the Black-Scholes approach is to show how certain idealizing assumptions allow the option to be priced at V given only the current stock price S, a measure of the volatility of the stock price σ, the prevailing interest rate r and the length of time between now and time T when the option can be exercised. The only unknown parameter here is σ, the volatility of the stock price, but even this can be estimated by looking at the past behavior of the stock or similar stocks. Using the value V computed using this equation, a trader can execute what appears to be a completely risk-free hedge. This involves either buying the option and selling the stock or selling the option and buying the stock. This position is apparently risk-free because the direction of the stock price is not part of the model, and so the trader need not take a stand on whether the stock price will go up or down.

The basic assumption underlying the derivation of (*) is that markets are efficient so that "successive price changes may be considered as uncorrelated random variables" (Almgren, p. 1). The time-interval between now and the time T when the option can be exercised is first divided into N-many time-steps. We can then deploy a lognormal model of the change in price δS_j at time-step j:

δS_j = a δt + σ S ξ_j √(δt)

The ξ_j are random variables whose mean is zero and whose variance is 1 (Almgren, p. 5). Our model reflects the assumption that the percentage size of the random changes in S remains the same as S fluctuates over time (Almgren, p. 8). The parameter a indicates the overall "drift" in the price of the stock, but it drops out in the course of the derivation.

Given that V is a function of both S and t, we can approximate a change in V for a small time-step δt using a series expansion known as a Taylor series

δV = V_t δt + V_S δS + 1/2 V_{SS} δS^2

where additional higher-order terms are dropped. Given an interest rate of r for the assets held as cash, the corresponding change in the value of the replicating portfolio Π = DS + C of D stocks and C in cash is

δΠ = D δS + rC δt

The last two equations allow us to easily represent the change in the value of a difference portfolio which buys the option and offers the replicating portfolio for sale. The change in value is

δ(V-Π) = (V_t - rC)δt + (V_S - D)δS + 1/2 V_{SS} δS^2

The δS term reflects the random fluctuations of the stock price, and if it could not be dealt with we could not derive a useful equation for V. But fortunately the δS term can be eliminated if we assume that at each time-step the investor can adjust the number of shares held so that

D = V_S

Then we get

δ(V-Π) = (V_t - rC)δt + 1/2 V_{SS} δS^2

The δS^2 term remains problematic for a given time-step, but we can find its sum over all the time-steps using our lognormal model. This permits us to simplify the equation so that, over the whole time interval Δt,

Δ(V-Π) = (V_t - rC + 1/2 σ^2 S^2 V_{SS})Δt

Strictly speaking, we are here applying a result known as Ito's Lemma.
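This is the step that can look like magic, so here is a quick numerical illustration (my own, not from Almgren). Assuming the increments carry the usual √δt scaling, the sum Σ ξ_j^2 δt concentrates around Δt as the number of steps grows, so Σ δS^2 ≈ σ^2 S^2 Δt regardless of which random path was realized. For simplicity S is frozen and the drift term dropped; the parameter values are arbitrary.

```python
import random

random.seed(0)
N = 100_000              # number of time-steps
T = 1.0                  # total time interval (arbitrary units)
dt = T / N
S, sigma = 100.0, 0.4    # illustrative values; S is held fixed here

# sum the squared random parts of the lognormal increments
total = sum((sigma * S * random.gauss(0.0, 1.0) * dt ** 0.5) ** 2
            for _ in range(N))

# despite each term being random, the sum is essentially deterministic:
# it clusters tightly around sigma^2 * S^2 * T
print(total, sigma ** 2 * S ** 2 * T)
```

The relative spread of the sum shrinks like √(2/N), which is why the δS^2 term stops being a source of randomness over the whole interval, the discrete shadow of Ito's Lemma.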

What is somewhat surprising is that we have found the net change in the value of the difference portfolio in a way that has dropped any reference to the random fluctuations of the stock price S. This allows us to deploy the efficient market hypothesis again and assume that Δ(V-Π) is identical to the result of investing V-Π in a risk-free bank account with interest rate r. That is,

Δ(V-Π) = r(V-Π)Δt

But given that V-Π = V - DS - C and D = V_S, we can simplify the right-hand side of this equation to

(rV - rV_S S - rC)Δt

Given our previous equation for the left-hand side, we get

(*) V_t + 1/2 σ^2 S^2 V_{SS} + rSV_S - rV = 0

after all terms are brought to the left-hand side.
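The derivation stops at the PDE, but it may help to see the familiar closed-form solution of (*) for a European call and to check numerically that it satisfies the equation. The sketch below uses the post's worked example (stock at $100, strike at $90, six months to exercise); the interest rate and volatility values are illustrative guesses of mine, not from the post.

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(S, t, K, T, r, sigma):
    """The well-known closed-form solution of (*) for a European call."""
    tau = T - t                                   # time left until exercise
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * N(d1) - K * exp(-r * tau) * N(d2)

# stock at $100, option to buy at $90, six months out;
# r and sigma below are illustrative, made-up values
S, K, T, r, sigma = 100.0, 90.0, 0.5, 0.05, 0.3
V = call_price(S, 0.0, K, T, r, sigma)
print(V)        # somewhat above the $10 the option is worth if exercised now

# finite-difference check that the formula satisfies (*):
#   V_t + 1/2 sigma^2 S^2 V_SS + r S V_S - r V = 0
h, k = 0.01, 1e-5
V_t  = (call_price(S, k, K, T, r, sigma)
        - call_price(S, -k, K, T, r, sigma)) / (2 * k)
V_S  = (call_price(S + h, 0.0, K, T, r, sigma)
        - call_price(S - h, 0.0, K, T, r, sigma)) / (2 * h)
V_SS = (call_price(S + h, 0.0, K, T, r, sigma) - 2 * V
        + call_price(S - h, 0.0, K, T, r, sigma)) / h ** 2
residual = V_t + 0.5 * sigma ** 2 * S ** 2 * V_SS + r * S * V_S - r * V
print(abs(residual))                              # effectively zero
```

The price exceeds the $10 intrinsic value precisely because of the time remaining: the option holder keeps the upside of further price moves while the downside is capped.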

Wednesday, September 30, 2009

Critical Notice of Mark Wilson's Wandering Significance

I have posted a long critical notice of Mark Wilson's amazing book Wandering Significance: An Essay on Conceptual Behavior. It will eventually appear in Philosophia Mathematica. My impression is that even though the book came out in 2006 and is now available in paperback, it has not really had the impact it should in debates about models and idealization. I think this is partly because the book addresses broad questions about concepts that don't often arise in philosophy of science or philosophy of mathematics. But if you start to read the book, it becomes immediately clear how important examples from science and mathematics are to Wilson's views of conceptual evaluation. So, I hope my review will help philosophers of science and mathematics see the importance of the book and the challenges it raises.

Saturday, July 25, 2009

The Honeycomb Conjecture (Cont.)

Following up my earlier post, and in line with Kenny’s perceptive comment, I wanted to raise two sorts of objections to the explanatory power of the Honeycomb Conjecture. I call them the problem of weaker alternatives and the bad company problem (in line with similar objections to neo-Fregeanism).

(i) Weaker alternatives: When a mathematical result is used to explain, there will often be a weaker mathematical result that seems to explain just as well. Often this weaker result will only contribute to the explanation if the non-mathematical assumptions are adjusted as well, but it is hard to know what is wrong with this. If this weaker alternative can be articulated, then it complicates the claim that a given mathematical explanation is the best explanation.

This is not just a vague possibility for the Honeycomb Conjecture case. As Hales relates
It was known to the Pythagoreans that only three regular polygons tile the plane: the triangle, the square, and the hexagon. Pappus states that if the same quantity of material is used for the constructions of these figures, it is the hexagon that will be able to hold more honey (Hales 2000, 448).
This suggests the following explanation of the hexagonal structure of the honeycomb:
(1) Biological constraints require that the bees tile their honeycomb with regular polygons without leaving gaps so that a given area is covered using the least perimeter.

(2) Pappus’ theorem: Any partition of the plane into regions of equal area using regular polygons has perimeter at least that of the regular hexagonal honeycomb tiling.
This theorem is much easier to prove and was known for a long time.
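Pappus' comparison is also easy to verify directly: among the three regular polygons that tile the plane, the hexagon encloses a unit area with the least perimeter. A small check (the formula follows from the standard area formula for a regular n-gon; this is my illustration, not Hales' proof):

```python
from math import pi, tan, sqrt

def perimeter(n, area=1.0):
    """Perimeter of a regular n-gon enclosing the given area.
    From area = (1/4) * n * s**2 / tan(pi/n), where s is the side length,
    solve for s and multiply by n."""
    return 2.0 * sqrt(n * area * tan(pi / n))

# the three regular polygons that tile the plane
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(name, round(perimeter(n), 3))
```

For unit area the triangle needs perimeter about 4.559, the square exactly 4, and the hexagon about 3.722, so the hexagon holds the most honey for a given amount of material, just as Pappus claimed.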

If this is a genuine problem, then it suggests an even weaker alternative which arguably deprives the explanation of its mathematical content:
(1) Biological constraints require that the bees tile their honeycomb with regular polygons without leaving gaps so that a given area is covered using the least perimeter.

(2’) Any honeycomb built using regular polygons has perimeter at least that of the regular hexagonal honeycomb tiling.
We could imagine supporting this claim using experiments with bees and careful measurements.

(ii) Bad company: If we accept the explanatory power of the Honeycomb Conjecture despite our uncertainty about its truth, then we should also accept the following explanation of the three-dimensional structure of the honeycomb. The honeycomb is built on the two-dimensional hexagonal pattern by placing the polyhedron given on the left of the picture both above and below the hexagon. The resulting polyhedron is called a rhombic dodecahedron.



So it seems like we can explain this by a parallel argument to the explanation of the two-dimensional case:
(1*) Biological constraints require that the bees build their honeycomb with polyhedra without leaving gaps so that a given volume is covered using the least surface area.

(2*) Claim: Any partition of a three-dimensional volume into regions of equal volume using polyhedra has surface area at least that of the rhombic dodecahedron pattern.
The problem is that claim (2*) is false. Hales points out that Toth showed that the figure on the right above is a counterexample, although “The most economical form has never been determined” (Hales 2000, 447).

This poses a serious problem to anyone who thinks that the explanatory power of the Honeycomb Conjecture is evidence for its truth. For in the closely analogous three-dimensional case, (2*) plays the same role, and yet is false.

My tentative conclusion is that both problems show that the bar should be set quite high before we either accept the explanatory power of a particular mathematical theorem or take this explanatory power to be evidence for its mathematical truth.

Thursday, July 23, 2009

What Follows From the Explanatory Power of the Honeycomb Conjecture?

Following up the intense discussion of an earlier post on Colyvan and mathematical explanation, I would like to discuss in more detail another example that has cropped up in two recent papers (Lyon and Colyvan 2008, Baker 2009). This is the Honeycomb Conjecture:
Any partition of the plane into regions of equal area has perimeter at least that of the regular hexagonal honeycomb tiling (Hales 2000, 449).
The tiling in question is just (Hales 2001, 1)



The Honeycomb Conjecture can be used to explain the way in which bees construct the honeycombs that they use to store honey. The basic idea of this explanation is that the bees which waste the minimum amount of material on the perimeters of the cells which cover a maximum surface area will be favored by natural selection. As Lyon and Colyvan put it:
Start with the question of why hive-bee honeycomb has a hexagonal structure. What needs explaining here is why the honeycomb is always divided up into hexagons and not some other polygon (such as triangles or squares), or any combination of different (concave or convex) polygons. Biologists assume that hivebees minimise the amount of wax they use to build their combs, since there is an evolutionary advantage in doing so. ... the biological part of the explanation is that those bees which minimise the amount of wax they use to build their combs tend to be selected over bees that waste energy by building combs with excessive amounts of wax. The mathematical part of the explanation then comes from what is known as the honeycomb conjecture: a hexagonal grid represents the best way to divide a surface into regions of equal area with the least total perimeter. … So the honeycomb conjecture (now the honeycomb theorem), coupled with the evolutionary part of the explanation, explains why the hive-bee divides the honeycomb up into hexagons rather than some other shape, and it is arguably our best explanation for this phenomenon (Lyon and Colyvan 2008, 228-229).
Lyon and Colyvan do not offer an account of how this conjecture explains, but we can see its explanatory power as deriving from its ability to link the biological goal of minimizing the use of wax with the mathematical feature of tiling a given surface area. It is thus very similar to Baker's periodic cicada case where the biological goal of minimizing encounters with predators and competing species is linked to the mathematical feature of being prime.

Baker uses the example to undermine Steiner's account of mathematical explanation. For Steiner, a mathematical explanation of a physical phenomenon must become a mathematical explanation of a mathematical theorem when the physical interpretation is removed. But Baker notes that the Honeycomb Conjecture wasn't proven until 1999 and this failed to undermine the explanation of the structure of the bees' hive (Baker 2009, 14).

So far, so good. But there are two interpretations of this case, only one of which fits with the use of this case in the service of an explanatory indispensability argument for mathematical platonism.
Scenario A: the biologists believe that the Honeycomb Conjecture is true and this is why it can appear as part of a biological explanation.
Scenario B: the biologists are uncertain if the Honeycomb Conjecture is true, but they nevertheless deploy it as part of a biological explanation.
It seems to me that advocates of explanatory indispensability arguments must settle on Scenario B. To see why, suppose that Scenario A is true. Then the truth of the Conjecture is presupposed when we give the explanation, and so the explanation cannot give us a reason to believe that the Conjecture is true. A related point concerns the evidence that the existence of the explanation is supposed to confer on the Conjecture according to Scenario B. Does anybody really think that the place of this conjecture in this explanation gave biologists or mathematicians a new reason to believe that the Conjecture is true? The worry seems even more pressing if we put the issue in terms of the existence of entities: who would conclude from the existence of this explanation that hexagons exist?

Hales, T. C. (2000). "Cannonballs and Honeycombs." Notices Amer. Math. Soc. 47: 440-449.

Hales, T. C. (2001). "The Honeycomb Conjecture." Disc. Comp. Geom. 25: 1-22.

Sunday, July 19, 2009

Two New Drafts: Surveys on "Philosophy of Mathematics" and "The Applicability of Mathematics"

I have posted preliminary drafts of two survey articles that are hopefully of interest to readers of this blog. The first is for the Continuum Companion to the Philosophy of Science, edited by French and Saatsi, on "Philosophy of Mathematics":
In this introductory survey I aim to equip the interested philosopher of science with a roadmap that can guide her through the often intimidating terrain of contemporary philosophy of mathematics. I hope that such a survey will make clear how fruitful a more sustained interaction between philosophy of science and philosophy of mathematics could be.
The second is for the Internet Encyclopedia of Philosophy on "The Applicability of Mathematics":
In section 1 I consider one version of the problem of applicability tied to what is often called "Frege's Constraint". This is the view that an adequate account of a mathematical domain must explain the applicability of this domain outside of mathematics. Then, in section 2, I turn to the role of mathematics in the formulation and discovery of new theories. This leaves out several different potential contributions that mathematics might make to science such as unification, explanation and confirmation. These are discussed in section 3 where I suggest that a piecemeal approach to understanding the applicability of mathematics is the most promising strategy for philosophers to pursue.
In line with the aims of the IEP, my article is more introductory, but hopefully points students to the best current literature.

Both surveys are of course somewhat selective, but comments and suggestions are more than welcome!

Thursday, July 9, 2009

Colyvan Blocks the "Easy Road" to Nominalism

In a paper posted on his webpage listed as forthcoming in Mind, Mark Colyvan launches a new offensive against fictionalists like Azzouni, Melia and Yablo. They present a non-platonist interpretation of the language of mathematics and science that, they argue, does not require the "hard road" that Field took. Recall that Field tried to present non-mathematical versions of our best scientific theories. As Colyvan describes the current situation, though, "There are substantial technical obstacles facing Field's project and these obstacles have prompted some to explore other, easier options" (p. 2). Colyvan goes on to argue that, in fact, these fictionalists do require the success of Field's project if their interpretations are to be successful.

I like this conclusion a lot, and it is actually superficially similar to what I argued for in my 2007 paper "A Role for Mathematics in the Physical Sciences". But what I argued is that Field's project is needed to specify a determinate content to mixed mathematical statements (p. 269). Colyvan takes a different and perhaps more promising route. He argues that without Field's project in hand, the fictionalist is unable to convincingly argue that apparent reference to mathematical entities is ontologically innocent. This is especially difficult given the prima facie role of mathematics in scientific explanation:
The response [by Melia] under consideration depends on mathematics playing no explanatory role in science, for it is hard to see how non-existent entities can legitimately enter into explanations (p. 11, see also p. 14 for Yablo).
I have noted this explanatory turn in debates about indispensability before, but here we see Colyvan moving things forward in a new and interesting direction. Still, I continue to worry that we need a better positive proposal for the source of the explanatory contributions from mathematics, especially if it is to bear the weight of defending platonism.

Saturday, June 20, 2009

New Draft: Mathematics, Science and Confirmation Theory

Here is the latest version of my paper from the PSA. As noted earlier, the goal of the session was to establish some links between philosophy of mathematics and philosophy of science. My aim was to make the connection through confirmation, although all I have done so far in this paper is raised the issue in what is hopefully a useful and novel way. This part of an ongoing project, so comments are certainly welcome!

Mathematics, Science and Confirmation
Abstract: This paper begins by distinguishing intrinsic and extrinsic contributions of mathematics to scientific representation. This leads to two investigations into how these different sorts of contributions relate to confirmation. I present a way of accommodating both contributions that complicates the traditional assumptions of confirmation theory. In particular, I argue that subjective Bayesianism does best accounting for extrinsic contributions, while objective Bayesianism is more promising for intrinsic contributions.

Saturday, April 11, 2009

New Draft: Abstract Representations and Confirmation

Here is a recent draft of a paper I have been working on throughout my year at the Pittsburgh Center for the Philosophy of Science. It corresponds roughly to chapters III and IV of my book project where I go into more detail with examples and the significance for confirmation. I hope to post a more comprehensive overview of the project soon, but for now this may interest those working on both modeling and indispensability arguments.

Abstract: Many philosophers would concede that mathematics contributes to the abstractness of some of our most successful scientific representations. Still, it is hard to know what this abstractness really comes to or how to make a link between abstractness and success. I start by explaining how mathematics can increase the abstractness of our representations by distinguishing two kinds of abstractness. First, there is abstract representation that eschews causal content. Second, there are families of representations with a common mathematical core that is variously interpreted. The second part of the paper makes a connection between both kinds of abstractness and success by emphasizing confirmation. That is, I will argue that the mathematics contributes to the confirmation of these abstract scientific representations. This can happen in two ways, which I label "direct" and "indirect". The contribution is direct when the mathematics facilitates the confirmation of an accurate representation, while the contribution is indirect when it helps the process of disconfirming an inaccurate representation. Establishing this conclusion helps to explain why mathematics is prevalent in some of our successful scientific theories, but I should emphasize that this is just one piece of a fairly daunting puzzle.

Update (July 23, 2009): I have now linked to a new version of the paper.

Update (Sept. 30, 2010): This paper has been removed.

Tuesday, March 24, 2009

It's Official: Math Can Do Anything

At least according to this IBM ad.

Sunday, February 8, 2009

Wilson on the Missing Physics

In “Determinism and the Mystery of the Missing Physics” (BJPS Advance Access) Mark Wilson uses the debate about determinism and classical physics to make the more general point about “the unstable gappiness that represents the natural price that classical mechanics must pay to achieve the extraordinary success it achieves on the macroscopic level” (3). Wilson focuses mostly on Norton’s “dome” example and Norton’s conclusion that it shows that classical mechanics is not deterministic. The main objection to this conclusion is that Norton relies on one particular fragment of classical mechanics, and only finds a counterexample to determinism by mistreating what are really “descriptive holes” (10). By contrast, Wilson argues that there are different fragments of classical mechanics: (MP) mass point particle mechanics, (PC) the physics of rigid bodies with perfect constraints (analytic mechanics), and (CM) continuum mechanics. Norton's example naturally lies in (PC). Each fragment has its own descriptive holes, which become manifest when we seek to understand the motivation for this or that mathematical technique or assumption at the basis of a treatment of a given system. Typically, a hole in one fragment can be fixed by moving to another fragment, but then that fragment has its own holes that prevent a comprehensive treatment. As a result, Wilson concludes that there is no single way the world has to be for “classical mechanics” to be true, and, in particular, there is no answer to the question of whether or not classical mechanics is deterministic.

I think Wilson has noticed something very important about the tendencies of philosophers of science: philosophical positions are typically phrased in terms of how things are quite generally or universally, but our scientific theories, when examined, are often not up to the task of answering such general questions. It seems to me that Wilson opts to resolve this situation by rejecting the philosophical positions as poorly motivated. But another route would be to try to recast the philosophical positions in more specific terms. For example, if, as Wilson argues, descriptive holes are more or less inevitable in these sorts of cases, then a suitably qualified kind of indeterminism, cashed out in terms of the existence of these holes, can be vindicated. Other debates, like the debate about scientific realism, seem to me to be in need of similar reform, rather than outright rejection.

Friday, December 19, 2008

Meyer on Field-style Reformulations of Statistical Mechanics

Glen Meyer offers an in-depth discussion of Field's program to nominalize science with special emphasis on the challenges encountered with classical equilibrium statistical mechanics (CESM). He makes a number of excellent points along the way, but what I like most is his focus on the prevalence of an appeal to what some call "surplus" mathematical structure, i.e. mathematics that has no natural physical interpretation. As he argues, Field could reconstruct configuration spaces for point particles using physical points in space-time, but would face difficulties extending this approach to phase spaces and probability distributions on phase spaces.

One novelty of the paper is a distinction between interpretation and representation. Mathematical theories have some mathematical terms with a semantic reference and a representational role, but other mathematical terms may have a semantic reference with no representational role. When idealizations involve this latter kind of term, Field-style reformulations are in trouble. For example, Meyer discusses the need to treat certain discrete quantities as continuous in the derivation of the Maxwell-Boltzmann distribution law:
The intended ('intrinsic') interpretations of axioms describing a certain structure forces that structure to represent, as it were, in its entirety, i.e., that this structure be exemplified in the subject matter of the theory. Any introduction of the idealization above at the nominalistic level will therefore force us to adopt assumptions about the physical world that the platonistic theory, despite its use of this idealization, does not make. Unlike the case of point particles, this idealization is not part of the nominalistic content of the platonistic theory and therefore does not belong in any nominalistic reformulation. Without it, however, we have no way of recovering this part of CESM (p. 37).
Here we have a derivation that ordinary theories can ground, but that Field-style nominalistic theories cannot. I agree with Meyer here, but this raises a further issue: why is it so useful to make these sorts of non-physical idealizations? It may just be a pragmatic issue of convenience, or perhaps there is something deeper to say about how the mathematics contributes without representing. (See Batterman's recent paper for one answer.)