Showing posts with label popular science.

Thursday, October 6, 2016

A Gentle Introduction to Scientific Realism

Here are some notes for a discussion that I led yesterday at Ohio State's Philosophy Club. There is nothing really new here, but these notes might be helpful for students who want a short, basic introduction to some aspects of the scientific realism debate. More thorough treatments can be found via Chakravartty's "Scientific Realism" entry on the Stanford Encyclopedia of Philosophy.

Does Science Tell the Truth? Notes for Ohio State Philosophy Club (Oct. 5, 2016)

Modern science presents us with many claims: the universe is more than 10 billion years old. The human species arose via evolution, with our lineage splitting from that of the chimpanzees around 6 million years ago. Material objects are composed of very small molecules and atoms, built up out of even more fundamental particles.

Are these claims true? If they are true, how do we know they are true? The scientific realist argues that science aims at the truth and that many of the claims found in modern science actually are known to be true. However, many reject scientific realism: it is said to be too optimistic concerning our abilities. On this view, we may never know the truth about many scientific claims, and so we should adjust our aim to something more tractable.

What are the alternatives to scientific realism? One option is simple skepticism. The skeptic argues that we can never know any claim whose subject-matter goes beyond our personal, present experiences. In particular, we can never know about the past or the future. Some attribute this skeptical position to David Hume (1711-1776). It strikes many people as too pessimistic. Surely, there is something wrong with a philosophical argument if it reaches this pessimistic conclusion. I am more certain that I know that I have hands, to use G. E. Moore's (1873-1958) example from "Proof of an External World" (1939), than I am of any philosophical premise of a skeptical argument. If this is right, then we do know certain claims, and the truth of these claims involves the past and the future.

We can draw on another example that Moore deploys in his lectures Some Main Problems of Philosophy (1910-1911): "the sun and moon and all the immense number of visible stars, are each of them great masses of matter, and most of them many times larger than the earth" (p. 3). Here is an example of a kind of scientific common sense that most of us accept, and accepting it shows that we not only reject skepticism but also move some way toward scientific realism.

There is an important intermediate position, though, that is best defended in our own time by Bas van Fraassen (b. 1941). He calls his view "constructive empiricism": it is based on a distinction between observable and unobservable entities. An observable entity is one that can be detected by an ordinary human being, unaided by instruments. So, a tree is observable because when it is there, and a human is appropriately close to it, the human can rightly come to believe that the tree is there simply by looking. But bacteria are unobservable because even when bacteria are present, a human needs an instrument like a microscope to reliably detect them.

Clearly, it is easier to know the truth about observable entities. It is not trivial, though. The far side of the moon is observable in van Fraassen’s sense because if a human stands there, they can directly see its features (with a flashlight). But it is practically very difficult to get to the right position. Van Fraassen is not focused on these practical difficulties. He argues that there is a deeper kind of obstacle to knowing the truth regarding unobservable entities. As a result, he concludes, science should aim only at the truth regarding observable entities. He invented a special term for a collection of claims that get things right about observable entities: this collection or theory is empirically adequate. So, the constructive empiricist aims at empirical adequacy, and not truth. And much of our best modern science is, for the constructive empiricist, empirically adequate, even though we have no basis to conclude that it is true.

What is the difference, really, between truth and empirical adequacy? Consider the case of bacteria. If you are a scientific realist, then you believe in the existence of bacteria and their role in causing illnesses, e.g., from eating certain foods. However, if you are a constructive empiricist, you may use the bacteria theory, but you do not think that all of its claims are true. You accept only what it says about observable entities. So, the theory supports our practice of pasteurizing milk. Milk is observable, and it is observable that some people get sick drinking milk that has come directly from a cow. Heating is also observable, and we find that when we heat the milk, fewer people get sick from drinking the milk. All of this the constructive empiricist can accept. They can even use the word “bacteria”, but they do not think the claims about bacteria living in the milk, or being eliminated by the heat, are known to be true.

The scientific realist claims that the entire theory is true. Why would they add the truth of these claims to the empirical adequacy of the theory? One influential motivation is tied to explanation. The existence of bacteria is a crucial part of a good explanation for why pasteurization limits these illnesses tied to drinking milk. As realists put it, this is in fact the best explanation: the illnesses drop off because the bacteria are eliminated. But this explanation requires that the claims about unobservable entities be true. Our commitment to the bacteria explanation requires scientific realism. The constructive empiricist cannot offer this explanation.

Why should that matter? It seems a kind of wishful thinking: we want to have explanations, and so we adopt theories that allow us to explain what we observe. Often those explanations will appeal to unobservable entities. So our desire for explanations leads us to adopt scientific realism. Is this tie to explanation anything more than wishful thinking?

The realist responds that this form of reasoning is widespread and accepted by everyone who believes in substantial knowledge, i.e. everyone who is not a Humean skeptic. Why, for example, should we believe that the observable regularities that we find extend into the past and the future? Consider the very regularity that the constructive empiricist adopted for the case of pasteurization: when you heat milk, it is less likely to cause a certain kind of illness. This is what we have found in the past, but why accept that this pattern will continue into the future? One explanation of the past instances of the pattern is that we have a genuine regularity that is based somehow on the features of milk, heating and humans (the observable entities). This is a better explanation than the proposal that what we have found so far is just a massive coincidence.

Typically we accept the best explanation available, and believe its claims primarily because their truth does explain what we have found. This is inference to the best explanation (IBE). We employ it in everyday life when (to borrow van Fraassen's example) we conclude that there is a mouse in our house based on various sounds and visible signs. And the constructive empiricist uses it in a restricted way when they conclude that the bacteria theory is empirically adequate. And finally the scientific realist uses an unrestricted form of IBE when they conclude that the bacteria theory is true.

This brings us to the central issue that divides the constructive empiricist from the scientific realist. Is there a coherent way to restrict IBE to observable entities in a way that does not entail Humean skepticism? That is the realist challenge to the empiricist. Is there a convincing way to justify extending IBE from observable entities to all entities? That is the empiricist challenge to the realist. Let’s conclude by considering these two challenges in more detail.

Here is why it is difficult to restrict IBE and yet avoid skepticism. The arguments that try to show that one should not use IBE for unobservable entities seem to also show that one should not use IBE for observable entities. But if we don’t use IBE, then we seem forced to skepticism. An example of this problem recalls Descartes’ (1596-1650) method of doubt in the Meditations. He resolved to reject any claim if its truth could be doubted, even if that claim involved a fantastic scenario. So, I can doubt the existence of the past if I suppose that a powerful demon created me five minutes ago with all my memories intact. If we use the method of doubt to call into question IBE for unobservable entities, then it clearly extends to IBE for observable entities. And so we are forced to skepticism.

Constructive empiricists can respond by offering a different reason to worry about IBE for unobservable entities. Consider, they say, the history of science, where the following pattern has played out many times. A scientist uses IBE to justify their claim to the existence of a new sort of unobservable entity. That claim is then widely accepted, and leads to many additional scientific successes. However, after a period of time, a new scientific innovation is made, and the scientific community comes to reject that unobservable entity as an illusion. The worry, then, is that IBE for unobservable entities has a bad track-record. We should not use this method of forming beliefs because it has proven unreliable as a way of arriving at the truth.

There are many examples that fit this pattern. One famous one concerns the "aether" that was proposed in the nineteenth century as the medium for light and then electromagnetic radiation more generally. Here is how James Clerk Maxwell (1831-1879) put it in 1878: "Whatever difficulties we may have in forming a consistent idea of the constitution of the aether, there can be no doubt that the interplanetary and interstellar spaces are not empty, but are occupied by a material substance or body, which is certainly the largest, and probably the most uniform body of which we have any knowledge". The same point applies to theories of disease: before bacteria and germs were blamed for disease, many blamed "bad air". The "miasma" theory, as it was called, had many successes, but is now dismissed as a massive error. Who, then, can be confident in our own realist commitments, given this poor track-record?

The advantage of this argument is that it is not fully general, and does not obviously support skepticism. For the empiricist can point out that there are fewer cases of this sort of error when IBE is used only to draw conclusions about observable entities. For example, we have theories about how to build bridges so that they do not collapse. Here the theory is tested by its successes. Sometimes bridges still do collapse, but the focus on the observable seems to have helped us get these claims right.

Does this meet the original realist challenge? If IBE about unobservables really is so much more unreliable than IBE about observables, then the realist challenge has been met. However, it is not clear if the historical examples really support this interpretation. Perhaps IBE about unobservables as it is done now really is very reliable. Various realists have tried to pinpoint what is different about 21st century uses of IBE. This is an ongoing debate that combines historical and conceptual claims about how science has been, and is being, done.

Let us turn then to the difficulty in convincing someone who accepts IBE for observable entities to extend IBE to unobservable entities. What can the realist say to convince someone to be a scientific realist? The key move is the link between an explanation and the truth. If we have a genuine explanation for why something occurs, then the explanation is made up of true claims. Consider something that we have found throughout modern science in the 20th and 21st centuries: science has made enormous contributions to technology and has also made many novel predictions about experiments that we subsequently found to be correct. To take two examples: atomic physics led to the development of nuclear weapons, and biologists have mapped the human genome. What is the best explanation for all these successes, both practical and experimental? The best explanation is clearly that all of these theories developed by the scientists are true. So, on this second or "meta" level, the success of science supports scientific realism. This is just IBE applied to science itself.

The constructive empiricist has a powerful response. IBE is admitted to be appropriate for observable entities. But this argument uses IBE for unobservable entities: the truth of these scientific theories requires the existence of unobservable entities. So, this argument in fact presupposes that it is appropriate to use IBE for unobservable entities. It presupposes what is in fact at issue between the constructive empiricist and the scientific realist.

So, does science tell the truth? The two main positions in the philosophy of science respond with a qualified "yes". The scientific realist argues that most or all of what science says, when it has generated successes, is true. The constructive empiricist argues that a restricted part of what science says is true, namely the claims it makes about observables, past and future. Neither position argues that science tells the whole truth. For both, additional philosophical reflection is needed to figure out what is true in science, and so it seems that philosophy is needed to get at one kind of truth: the truth about science itself.

Thursday, February 2, 2012

Babies are Newtonians?

Following an earlier post noting the apparent Bayesian tendencies of babies, we now have word from fellow University of Missouri professor Kristy vanMarle that babies have innate knowledge of Newtonian physics.
From the Yahoo News summary "Infants Grasp Gravity with Innate Sense of Physics":
"We believe that infants are born with expectations about the objects around them, even though that knowledge is a skill that's never been taught," Kristy vanMarle, an assistant professor of psychological sciences at the University of Missouri, said in a statement. "As the child develops, this knowledge is refined and eventually leads to the abilities we use as adults."
To come to this conclusion, vanMarle and her colleague, Susan Hespos, a psychologist at Northwestern University, reviewed infant cognition research conducted over the last 30 years. They found that infants already have an intuitive understanding of certain physical laws by 2 months of age, when they start to track moving objects with both eyes consistently and can be tested with eye-tracking technology.
For instance, at this age they understand that unsupported objects will fall (gravity) and hidden objects don't cease to exist. In one test, researchers placed an object inside of a container and moved the container; 2-month-old infants knew that the hidden object moved with the container.
This innate "physics" knowledge only grows as the infants experience their surroundings and interact more with the world. By 5 months of age, babies understand that solid objects have different properties than noncohesive substances, such as water, the researchers found.
Note: Regular readers will notice that I have given in to the Google/Blogger renovations and opted for the white background. Also, I have enabled mobile formatting for easy access to this blog on mobile devices.

Monday, August 22, 2011

Classic Mr. Show: 24 is the highest number

Some inspiration to start the semester:

Thursday, July 14, 2011

The unplanned impact of mathematics (Nature)

Peter Rowlett has assembled, with some other historians of mathematics, seven accessible examples of how theoretical work in mathematics led to unexpected practical applications. His discussion seems to be primarily motivated by the recent emphasis on the "impact" of research, both in Britain and in the US:
There is no way to guarantee in advance what pure mathematics will later find application. We can only let the process of curiosity and abstraction take place, let mathematicians obsessively take results to their logical extremes, leaving relevance far behind, and wait to see which topics turn out to be extremely useful. If not, when the challenges of the future arrive, we won't have the right piece of seemingly pointless mathematics to hand.
For philosophers, the most important example to keep in mind, I think, is the last one, offered by Chris Linton: the role of Fourier series in promoting the later "rigorization" of math:
In the 1870s, Georg Cantor's first steps towards an abstract theory of sets came about through analysing how two functions with the same Fourier series could differ.
Rowlett has a call for more examples on the BSHM website. Hopefully this will convince some funding agencies that immediate impact is not a fair standard!

Friday, May 27, 2011

Babies are Bayesians?

From the abstract of a recent paper in Science:
When 12-month-old infants view complex displays of multiple moving objects, they form time-varying expectations about future events that are a systematic and rational function of several stimulus variables. Infants’ looking times are consistent with a Bayesian ideal observer embodying abstract principles of object motion. The model explains infants’ statistical expectations and classic qualitative findings about object cognition in younger babies, not originally viewed as probabilistic inferences.
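
For readers wondering what a "Bayesian ideal observer" involves, here is a minimal sketch of the kind of inference at issue. To be clear, this is not the model from the Science paper: the 3-to-1 mix of object types and the 80%-reliable glimpse below are made up purely for illustration. The observer starts with a prior over which kind of object will emerge from behind an occluder and updates it by Bayes' rule when noisy evidence arrives.

```python
# Toy Bayesian "ideal observer" for an occluded-object display.
# Not the model from the paper -- just an illustration of Bayes' rule.

def posterior(prior, likelihoods):
    """Bayes' rule over a dict of hypotheses, given the likelihood of
    the observed evidence under each hypothesis."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Suppose the display holds 3 yellow objects and 1 blue one, all moving
# at random, so the prior that the first object to emerge is yellow is 3/4.
prior = {"yellow": 3 / 4, "blue": 1 / 4}

# Evidence: a brief, noisy glimpse suggests the object nearest the exit is
# blue; assume (hypothetically) the glimpse reports the true colour 80% of
# the time.
likelihoods = {"yellow": 0.2, "blue": 0.8}

print(posterior(prior, likelihoods))
# roughly {'yellow': 0.43, 'blue': 0.57}
```

On this sort of analysis, the outcomes assigned low posterior probability are the "surprising" ones, and, as I read the abstract, the claim is that infants' looking times track exactly that.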

Sunday, May 8, 2011

Group Selection Explains "Why We Celebrate a Killing"?

In an otherwise thoughtful piece in the New York Times on the reactions to Bin Laden's killing, Jonathan Haidt throws in a weird flourish:
There’s the lower level at which individuals compete relentlessly with other individuals within their own groups. This competition rewards selfishness.

But there’s also a higher level at which groups compete with other groups. This competition favors groups that can best come together and act as one. Only a few species have found a way to do this. ...

Early humans found ways to come together as well, but for us unity is a fragile and temporary state. We have all the old selfish programming of other primates, but we also have a more recent overlay that makes us able to become, briefly, hive creatures like bees. Just think of the long lines to give blood after 9/11. Most of us wanted to do something — anything — to help.
So,
last week’s celebrations were good and healthy. America achieved its goal — bravely and decisively — after 10 painful years. People who love their country sought out one another to share collective effervescence. They stepped out of their petty and partisan selves and became, briefly, just Americans rejoicing together.
The claim seems to be that because these reactions have their origins in group selection, and group selection benefits groups, displaying these reactions now is "good and healthy"? Not the best argument, I would say.

Wednesday, April 27, 2011

Periodical Cicadas Invade Missouri!

Those interested in mathematical explanations of physical phenomena should take note: May 15th is the predicted date for the emergence of swarms of the "Great Southern Brood" of cicadas, whose life cycle is 13 years. More details are provided by the Columbia Missourian:
Periodical cicadas survive on a strategy of satiating their predators. They emerge in such large numbers that there will always be some left over to reproduce. After a while, predators get tired of eating the cicadas and leave them alone.

“If you walked outside and found the world swarming with Hershey Kisses, eventually you would get so sick of Hershey Kisses that you would never ever want to eat them again,” Kritsky said.
The mathematical explanation answers the question: why is their life cycle a prime number?
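
The standard proposal, as I understand it, is that a prime-numbered life cycle minimizes how often emergences coincide with shorter-cycled predators (or with other periodical broods), since the gap between coincidences is the least common multiple of the two cycle lengths. A small sketch makes the comparison explicit; the 2- to 9-year "predator" cycles here are illustrative stand-ins, not field data.

```python
from math import lcm  # Python 3.9+

def years_between_overlaps(brood_cycle, predator_cycles):
    """Years between coincidences of a periodical brood and each
    hypothetical predator cycle (the least common multiple)."""
    return {p: lcm(brood_cycle, p) for p in predator_cycles}

predator_cycles = range(2, 10)  # illustrative shorter life cycles
for brood_cycle in (12, 13, 16, 17):
    print(brood_cycle, years_between_overlaps(brood_cycle, predator_cycles))

# A 13-year brood meets a 4-year predator only every 52 years, while a
# 12-year brood would meet it every 12 years: prime cycles keep such
# overlaps as rare as possible.
```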

Thursday, March 17, 2011

Is experimental philosophy a part of science?

If we can trust the journal Science, then the answer is "yes".

Thursday, October 7, 2010

Okasha Takes On The Inclusive Fitness Controversy

In a helpful commentary in the current issue of Nature, Samir Okasha summarizes the recent dispute about inclusive fitness. In an article from earlier this year, E. O. Wilson and two collaborators argued that inclusive fitness (or kin selection) is dispensable for explaining altruistic behavior. For my purposes, what is most interesting about this debate is that the Wilson argument depends on an alternative mathematical treatment which seems to eliminate the need for anything to track inclusive fitness. As a result, inclusive fitness is seen merely as a book-keeping device with no further explanatory significance.

Okasha suggests that the dispute is overblown and that each of the competing camps should recognize that a divergence in mathematical treatment need not signal any underlying disagreement. As he puts it at one point:
Much of the current antagonism could easily be resolved — for example, by researchers situating their work clearly in relation to existing literature; using existing terminology, conceptual frameworks and taxonomic schemes unless there is good reason to invent new ones; and avoiding unjustified claims of novelty or of the superiority of one perspective over another.

It is strange that such basic good practice is being flouted. The existence of equivalent formulations of a theory, or of alternative modelling approaches, does not usually lead to rival camps in science. The Lagrangian and Hamiltonian formulations of classical mechanics, for example, or the wave and matrix formulations of quantum mechanics, tend to be useful for tackling different problems, and physicists switch freely between them.
This point is right as far as it goes, but my impression is that some biologists and philosophers of biology over-interpret the concept of fitness. If Wilson et al. are correct, then there is simply no need to believe that inclusive fitness tracks any real feature of biological systems. And this interpretative result would be significant for our understanding of altruism and natural selection more generally.

Tuesday, April 6, 2010

Wash Post Reminds Us That There is No Perfect Climate Model

Here. There are some useful quotations from scientists, including:
If the models are as flawed as critics say, Schmidt said, "You have to ask yourself, 'How come they work?'"
What is missing from the article, though, is any discussion of the more or less risky claims which we might derive from examining a model or computer simulation. It seems that even though the models are highly detailed, most climate scientists are comfortable drawing only highly abstract conclusions. For example, they do not take seriously the temperature predictions for Indiana, but do take seriously the predictions for the global mean temperature.

Monday, February 8, 2010

The Disunity of Climate Science

While there has been a lot of misleading coverage of the stolen e-mails from East Anglia, the Guardian offers an intriguing look inside the fallout from the more significant retraction of the 2007 IPCC report claims about the Himalayan icepack:
Speaking on condition of anonymity, several lead authors of the working group one (WG1) report, which produced the high-profile scientific conclusions that global warming was unequivocal and very likely down to human activity, told the Guardian they were dismayed by the actions of their colleagues.

"Naturally the public and policy makers link all three reports together," one said. "And the blunder over the glaciers detracts from the very carefully peer-reviewed science used exclusively in the WG1 report."

Another author said: "There is no doubt that the inclusion of the glacier statement was sloppy. I find it embarrassing that working group two (WG2) would have the Himalaya statement referred to in the way it was."

Another said: "I am annoyed about this and I do think that WG1, the physical basis for climate change, should be distinguished from WG2 and WG3. The latter deal with impacts, mitigation and socioeconomics and it seems to me they might be better placed in another arm of the United Nations, or another organisation altogether."

The scientists were particularly unhappy that the flawed glacier prediction contradicted statements already published in their own report. "WG1 made a proper assessment of the state of glaciers and this should have been the source cited by the impacts people in WG2," one said. "In the final stages of finishing our own report, we as WG1 authors simply had no time to also start double-checking WG2 draft chapters."

Another said the mistake was made "not by climate scientists, but rather the social and biological scientists in WG2 ... Clearly that WWF report was an inappropriate source, [as] any glaciologist would have stumbled over that number."
As I understand the science, the climate models used to support the central claims of the report are unequivocal. But they don't always give information relevant to policy makers such as exactly how much hotter it is going to get in Indiana or what year the Himalayan icepack will melt. This creates a temptation to leap in and provide more precise predictions than the models support. What is interesting here is that the "hard scientists" are blaming the "social and biological scientists" for giving in to this temptation.

Thursday, December 17, 2009

Dark Matter Rumors (cont.)

The results that prompted the rumors noted in an earlier post have now been unveiled. They involve the search for Weakly Interacting Massive Particles (WIMPs), which are predicted by some theories of dark matter. The group has provided a helpful two-page summary, with the key paragraph:
In this new data set there are indeed 2 events seen with characteristics consistent with those expected from WIMPs. However, there is also a chance that both events could be due to background particles. Scientists have a strict set of criteria for determining whether a new discovery has been made, in essence that the ratio of signal to background events must be large enough that there is no reasonable doubt. Typically there must be less than one chance in a thousand of the signal being due to background. In this case, a signal of about 5 events would have met those criteria. We estimate that there is about a one in four chance to have seen two backgrounds events, so we can make no claim to have discovered WIMPs. Instead we say that the rate of WIMP interactions with nuclei must be less than a particular value that depends on the mass of the WIMP. The numerical values obtained for these interaction rates from this data set are more stringent than those obtained from previous data for most WIMP masses predicted by theories. Such upper limits are still quite valuable in eliminating a number of theories that might explain dark matter. (emphasis added)
So, Bryan's prediction was correct! Now if only the scientists would tell us what "reasonable doubt" amounts to ...
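
For the curious, the "one in four" figure is the sort of number a back-of-the-envelope Poisson calculation delivers. The background expectation below is illustrative only (the collaboration's own estimate and statistical treatment are more involved), but with roughly 0.9 expected background events the chance of seeing two or more comes out near one in four:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when lam events are expected."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 0.9  # illustrative expected number of background events
p_two_or_more = 1 - poisson_pmf(0, lam) - poisson_pmf(1, lam)
print(f"P(2 or more background events | {lam} expected) = {p_two_or_more:.2f}")
# prints about 0.23 -- in the ballpark of the quoted "one in four"
```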

A Universal Pattern for Insurgents?

From this week's Nature:
The researchers collected data on the timing of attacks and number of casualties from more than 54,000 events across nine insurgent wars, including those fought in Iraq between 2003 and 2008 and in Sierra Leone between 1994 and 2003. By plotting the distribution of the frequency and size of events, the team found that insurgent wars follow an approximate power law, in which the frequency of attacks decreases with increasing attack size to the power of 2.5. That means that for any insurgent war, an attack with 10 casualties is 316 times more likely to occur than one with 100 casualties (316 is 10 to the power of 2.5).

[...]

To explain what was driving this common pattern, the researchers created a mathematical model that assumes that insurgent groups form and fragment when they sense danger, and strike in well-timed bursts to maximize their media exposure. The model gave results that resembled the power-law distribution of actual attacks.
This all seems a bit too easy, although I must admit I have not delved into the details of the actual model. I'm also a bit wary of the predictive power of the model, as with "He is now working to predict how the insurgency in Afghanistan might respond to the influx of foreign troops recently announced by US President Barack Obama". But at least this is yet one more case of a purported mathematical explanation in science.
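
The quoted arithmetic, at least, is easy to check: if attack frequency falls off as the casualty count raised to the power 2.5, then a 10-casualty attack is (100/10)^2.5 ≈ 316 times as likely as a 100-casualty one.

```python
alpha = 2.5                    # reported power-law exponent
ratio = (100 / 10) ** alpha    # relative frequency of 10- vs 100-casualty attacks
print(round(ratio))            # -> 316
```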

Sunday, December 13, 2009

Dark Matter Rumors Persist

Philosophers interested in tracking how scientists argue for the existence of novel entities might want to stay tuned this week. Rumors of a big announcement, centered largely around the blog Resonances and this post, continue.

Thursday, October 15, 2009

The Polymath Project

Gowers and Nielsen offer in the current issue of Nature a report on the online collaboration in mathematics known as the Polymath Project. It is hard to know what to make of it all without delving into the details and trying to understand if there is anything special about this problem which lends itself to collaboration. But two passages jump out for the philosopher:
This theorem was already known to be true, but for mathematicians, proofs are more than guarantees of truth: they are valued for their explanatory power, and a new proof of a theorem can provide crucial insights.
The working record of the Polymath Project is a remarkable resource for students of mathematics and for historians and philosophers of science. For the first time one can see on full display a complete account of how a serious mathematical result was discovered. It shows vividly how ideas grow, change, improve and are discarded, and how advances in understanding may come not in a single giant leap, but through the aggregation and refinement of many smaller insights. It shows the persistence required to solve a difficult problem, often in the face of considerable uncertainty, and how even the best mathematicians can make basic mistakes and pursue many failed ideas. There are ups, downs and real tension as the participants close in on a solution. Who would have guessed that the working record of a mathematical project would read like a thriller?
At over 150 000 words, these records should keep some philosopher busy for a while!

Friday, October 9, 2009

Nobel Prize for Efficient Markets Hypothesis?

One of the core ideas driving the derivation of the Black-Scholes model is the efficient markets hypothesis. Exactly what this comes to is hopefully something I'll post on next week. But for now I'll pass on this from NPR's Marketplace:
Kai Ryssdal's final note.

Not so much news as a commentary on the state of the economic profession. The Nobel Prize in economics comes out Monday morning. I obviously have no idea who's going to win, but the markets think they do. The betting line at Ladbrokes, in London, has Eugene Fama of the University of Chicago as a 2-to-1 favorite.

That's all well and good except for this: Fama's best known for something called the Efficient Markets Theory. That the markets are, in essence, always right. I dunno, I'd say that's a tough sell after the year and a half we've just had. More to come on Monday.

Tuesday, September 8, 2009

Krugman on Mathematics and the Failure of Economics

Probably anyone who is interested in this article has already seen it, but Paul Krugman put out an article in Sunday's New York Times Magazine called "How Did Economists Get It So Wrong?". The article is very well-written, but a bit unsatisfying as it combines Krugman's more standard worries about macroeconomics with a short attack on financial economics. I am trying to write something right now about the ways in which mathematics can lead scientists astray, and one of my case studies is the celebrated Black-Scholes model for option pricing. Hopefully I can post more on that soon, but here is what Krugman says about it and similar models which are used to price financial derivatives and devise hedging strategies.

My favorite part is where Krugman says "the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth". But he never really follows this up with much discussion of the mathematics or why it might have proven so seductive. Section III attacks "Panglossian Finance", but this is presented as if it assumes "The price of a company's stock, for example, always accurately reflects the company's value given the information available". But, at least as I understand it, this is not the "efficient market hypothesis" which underlies models like Black-Scholes. Instead, this hypothesis makes the much weaker assumption that "successive price changes may be considered as uncorrelated random variables" (Almgren 2002, p. 1). This is the view that prices over time amount to a "random walk". It has serious problems as well, but I wish Krugman had spent an extra paragraph attacking his real target.
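
For what it's worth, that weaker assumption is easy to state in code. Below is a minimal sketch of the "uncorrelated increments" picture behind Black-Scholes-style models: daily log-price changes are drawn independently (the drift and volatility numbers are made up), and the lag-1 autocorrelation of the simulated returns comes out near zero. This only illustrates what the assumption says; it takes no stand on whether real markets satisfy it.

```python
import random
import statistics

random.seed(0)
mu, sigma, n = 0.0002, 0.01, 5000  # made-up daily drift and volatility

# The "random walk" picture: successive log-price changes are independent draws.
returns = [random.gauss(mu, sigma) for _ in range(n)]

def lag1_autocorrelation(xs):
    """Sample autocorrelation of a series with itself shifted by one step."""
    m = statistics.fmean(xs)
    numerator = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    denominator = sum((a - m) ** 2 for a in xs)
    return numerator / denominator

print(f"lag-1 autocorrelation of returns: {lag1_autocorrelation(returns):.3f}")
# close to 0.0, as the uncorrelated-increments assumption requires
```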

Almgren, R. (2002). Financial derivatives and partial differential equations. American Mathematical Monthly, 109: 1-12.

Friday, August 14, 2009

Computer Simulations Support Some New Mathematical Theorems

The current issue of Nature contains an exciting case of the productive interaction of mathematics and physics. As Cohn summarizes here, Torquato and Jiao use computer simulations and theoretical arguments to determine the densest way to pack different sorts of polyhedra together in three-dimensional space:
To find their packings, Torquato and Jiao use a powerful simulation technique. Starting with an initial guess at a dense packing, they gradually modify it in an attempt to increase its density. In addition to trying to rotate or move individual particles, they also perform random collective particle motions by means of deformation and compression or expansion of the lattice's fundamental cell. With time, the simulation becomes increasingly biased towards compression rather than expansion. Allowing the possibility of expansion means that the particles are initially given considerable freedom to explore different possible arrangements, but are eventually squeezed together into a dense packing.
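
To get a feel for this style of algorithm (though emphatically not Torquato and Jiao's actual adaptive scheme), here is a toy version for hard disks in a periodic square box rather than polyhedra in three dimensions: random single-particle moves that respect non-overlap, plus occasional box rescalings, with expansions permitted early on but phased out so that the configuration is steadily squeezed toward higher density.

```python
import math
import random

random.seed(1)

N_SIDE = 4                      # disks per row, so N = 16 disks in total
N, R = N_SIDE ** 2, 0.5         # number of disks and their radius
L = 10.0                        # initial side of the periodic square box

# Start from a dilute square-lattice arrangement (certainly overlap-free).
frac = [((i + 0.5) / N_SIDE, (j + 0.5) / N_SIDE)
        for i in range(N_SIDE) for j in range(N_SIDE)]  # fractional coordinates

def dist(i, j, box):
    """Minimum-image distance between disk centres i and j."""
    dx = abs(frac[i][0] - frac[j][0])
    dy = abs(frac[i][1] - frac[j][1])
    dx = min(dx, 1.0 - dx) * box
    dy = min(dy, 1.0 - dy) * box
    return math.hypot(dx, dy)

def overlap_free(box):
    return all(dist(i, j, box) >= 2 * R
               for i in range(N) for j in range(i + 1, N))

steps = 20000
for t in range(steps):
    if random.random() < 0.9:
        # Single-particle move: keep it only if it creates no overlap.
        i = random.randrange(N)
        old = frac[i]
        frac[i] = ((old[0] + random.uniform(-0.02, 0.02)) % 1.0,
                   (old[1] + random.uniform(-0.02, 0.02)) % 1.0)
        if any(dist(i, k, L) < 2 * R for k in range(N) if k != i):
            frac[i] = old
    else:
        # Box move: expansions are allowed early (to explore arrangements)
        # but become rarer with time, biasing the search toward compression.
        expand = random.random() < 0.5 * (1 - t / steps)
        new_L = L * (1.01 if expand else 0.995)
        if overlap_free(new_L):
            L = new_L

print(f"final packing fraction: {N * math.pi * R ** 2 / L ** 2:.3f}")
```

In the real three-dimensional problem, as the quoted summary notes, the fundamental cell itself is deformed rather than merely rescaled, which is part of what lets the simulation discover non-obvious lattice packings.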
A central kind of case considered is the densest packings of the Platonic solids. These are the five polyhedra formed using only regular polygons of a single sort, where the same number of polygons meet at each vertex: tetrahedron, icosahedron and octahedron (all using triangles), cube (using squares) and dodecahedron (using pentagons). Setting aside the trivial case of the cube, Torquato and Jiao argue that the densest packings for the icosahedron, octahedron and dodecahedron all have a similar feature: they result from a simple lattice structure known as a Bravais lattice. Again, using Cohn's summary:
In such arrangements, all the particles are perfectly aligned with each other, and the packing is made up of lattice cells that each contain only one particle. The densest Bravais lattice packings had been determined previously, but it had seemed implausible that they were truly the densest packings, as Torquato and Jiao's simulations and theoretical analysis now suggest.
The outlier here is the tetrahedron, where the densest packing remains unknown.

Needless to say, there are many intriguing philosophical questions raised by this argument and its prominent placement in a leading scientific journal. To start, how do these arguments using computer simulations compare to other sorts of computer assisted proofs, such as the four color theorem or the more recent Kepler Conjecture? More to the point, does the physical application of these results have any bearing on the acceptability of using computer simulations in this way?

Thursday, July 16, 2009

El Niño Has Arrived. But What is El Niño?

According to Nature the latest El Niño has begun in the Pacific. I got interested in this meteorological phenomenon back when I was living in California and coincidentally read Mike Davis' polemic Late Victorian Holocausts: El Niño Famines and the Making of the Third World. While a bit over the top, it contains a great section on the history of large-scale meteorology, including the discovery of El Niño. As I discuss in this article, El Niño is a multi-year cyclical phenomenon over the Pacific that affects sea-surface temperature and pressure from India to Argentina. What I think is so interesting about it from a philosophy of science perspective is that scientists can predict its evolution once a given cycle has formed, but a detailed causal understanding of what triggers a cycle or what ends it remains a subject of intense debate. See, for example, this page for an introduction to the science and here for a 2002 article by Kessler which asks if El Niño is even a cycle. This provides yet one more case where causal ignorance is overcome by sophisticated science and mathematics.

Thursday, July 9, 2009

Scientists Wonder If Philosophy Makes You a Better Scientist

Over at Cosmic Variance Sean Carroll has initiated an ongoing discussion of the following passage from Feyerabend:
The withdrawal of philosophy into a “professional” shell of its own has had disastrous consequences. The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrodinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth — and this is the fault of the very same idea of professionalism which you are now defending.
With some hesitation Carroll concludes that "I tend to think that knowing something about philosophy — or for that matter literature or music or history — will make someone a more interesting person, but not necessarily a better physicist." (See comment 56 by Lee Smolin and comment 64 by Craig Callender for some useful replies.)

Beyond that debate, it's worth wondering how knowing some science and mathematics helps the philosopher of science and mathematics. Pretty much everyone in these areas of philosophy would agree that it does help, but exactly how is probably a controversial issue.