Showing posts with label epistemology. Show all posts
Monday, August 30, 2010
New Book: Sociological Aspects of Mathematical Practice
Benedikt Löwe and Thomas Müller have edited an interesting collection of papers from the collaborative PhiMSAMP project. They have taken the very welcome step of making all the papers freely available for download for research use at this address. As the related page makes clear, this group aims "to bring young researchers with foundational and sociological attitudes together and discuss a unified approach towards a philosophy of mathematics that includes both sociological analyses but is able to deal with the status of an epistemic exception that mathematics forms among the sciences."
Tuesday, August 10, 2010
Bergmann and Kain receive major Templeton award
Congratulations to my Purdue colleagues Mike Bergmann and Pat Kain who have been awarded a major Templeton grant to study how knowledge works in morality and religion! The Purdue press release describes several upcoming conferences on the general theme of skepticism and disagreement and their implications for knowledge in these domains.
Wednesday, July 14, 2010
Gettier Cases for Mathematical Concepts
Although I noted the appearance of this book back when it came out, it is only recently that I have had the chance to read Jenkins' Grounding Concepts. I agree with the conclusion of Schechter's review that "Anyone interested in the epistemology of arithmetic or the nature of a priori knowledge would profit from reading it." But at one point Schechter asks
In supporting Jenkins' view, it would be helpful to have an intuitive case of a thinker who forms a justified true belief that is not knowledge on the basis of a competent conceptual examination of an ungrounded concept. That is to say, it would be helpful to have a clear example of a Gettier case for concepts.

Jenkins uses these sorts of cases to undermine Peacocke's account of a priori mathematical knowledge. But she often resorts to comparisons between maps and concepts in a way that makes the point less convincing.
In the last chapter of my book I discuss the issue. While I do not ultimately agree with Jenkins on all points, I think she has a very good objection to Peacocke's views. Here is how I summarize the issue in the manuscript. I have adapted Jenkins' map cases to the sort of mathematical concept cases that I believe Schechter is after:

Jenkins' basic concern is that even a perfectly reliable concept is not conducive to knowledge if our possession of that concept came about in the wrong way. She compares two cases where we wind up with what is in fact a highly accurate map. In the first case, which is analogous to a Gettier case, a trustworthy friend gives you a map which, as originally drawn up by some third party, was deliberately inaccurate. The fact that the map has become highly accurate and that you have some justification to trust it is not sufficient to conclude that your true, justified beliefs based on the map are cases of knowledge. In the second case, simply finding a map which happens to be accurate and trusting it "blindly and irrationally all the same" will block any beliefs which you form from being cases of knowledge. Jenkins extends these points about maps to our mathematical concepts:

A concept could be such that its possession conditions are tied to the very conditions which individuate the reference of that concept (...), but not in the way that is conducive to our obtaining knowledge by examining the concept (Jenkins 2008, p. 61).

The first sort of problem could arise if the concept came to us via some defective chain of testimony. For example, a "crank" mathematician develops a new mathematical theory with his own foolishly devised concepts and passes them off to our credulous high school mathematics teacher, who teaches the theory to us. The mere fact that the crank mathematician has happened to pick out the right features of some mathematical domain is insufficient to confer knowledge on us. The second kind of problem would come up if I was, based on a failure of self-knowledge, like the crank mathematician myself, coming up with new mathematical concepts based on my peculiar reactions to Sudoku puzzles. Again, this approach to mathematics would not lead to knowledge even if my concepts happened to reflect genuine features of the mathematical world.

I agree with Jenkins that examples of this sort show the need for either some grounding for our concepts or some non-conceptual sources of evidence.
Tuesday, July 7, 2009
Mancosu on Mathematical Style
Paolo Mancosu continues his innovative work in the philosophy of mathematics with a thought-provoking survey article on Mathematical Style for the Stanford Encyclopedia of Philosophy. From the introductory paragraph:
The essay begins with a taxonomy of the major contexts in which the notion of ‘style’ in mathematics has been appealed to since the early twentieth century. These include the use of the notion of style in comparative cultural histories of mathematics, in characterizing national styles, and in describing mathematical practice. These developments are then related to the more familiar treatment of style in history and philosophy of the natural sciences where one distinguishes ‘local’ and ‘methodological’ styles. It is argued that the natural locus of ‘style’ in mathematics falls between the ‘local’ and the ‘methodological’ styles described by historians and philosophers of science. Finally, the last part of the essay reviews some of the major accounts of style in mathematics, due to Hacking and Granger, and probes their epistemological and ontological implications.

As Mancosu says later in the article, "this entry is the first attempt to encompass in a single paper the multifarious contributions to this topic". So it is wide-open for further philosophical investigation!
Saturday, June 20, 2009
New Draft: Mathematics, Science and Confirmation Theory
Here is the latest version of my paper from the PSA. As noted earlier, the goal of the session was to establish some links between philosophy of mathematics and philosophy of science. My aim was to make the connection through confirmation, although all I have done so far in this paper is raise the issue in what is hopefully a useful and novel way. This is part of an ongoing project, so comments are certainly welcome!
Mathematics, Science and Confirmation
Abstract: This paper begins by distinguishing intrinsic and extrinsic contributions of mathematics to scientific representation. This leads to two investigations into how these different sorts of contributions relate to confirmation. I present a way of accommodating both contributions that complicates the traditional assumptions of confirmation theory. In particular, I argue that subjective Bayesianism does best at accounting for extrinsic contributions, while objective Bayesianism is more promising for intrinsic contributions.
Sunday, February 22, 2009
Post-blogging the Central: Plantinga and Dennett
The Central APA in Chicago this past weekend seemed fairly empty, although I heard from one of the organizers that registrations this year were about the same as last year. One of the more interesting sessions was in the very last time slot, and had Dennett commenting on Plantinga's "Science and Religion: Where the Conflict Really Lies". Partly what was remarkable about the session was how many people were there. The room was changed at the last minute to accommodate the additional interest, but even so, it was still standing room only. With at least 200 people packed into a small conference room, it was certainly one of the better attended APA events that I have been to.
I had to leave early to make my flight, so I only heard Plantinga's talk. Here I didn't hear much that was new. In the first half Plantinga argued that a committed theist could accept evolution because evolution per se is compatible with theism. This is mainly because the process of natural selection with random variation was said to be consistent with a divine plan which guided what we see as random. I am not sure how much this point of view depends on Plantinga's view that the warrant for theism is basic, but granting that point, I can see the coherence of his position.
The second half of the presentation argued that there is a quasi-religious "naturalism" which is in fact in tension with belief in evolution. Here Plantinga rehearsed his notorious argument that the combination of naturalism and evolution is self-defeating because it undermines the belief in the reliability of our cognitive faculties, and so provides a defeater for these beliefs.
Hearing the argument again drew my attention to one of the steps that seems very problematic. Plantinga's first premise is that P(R/N&E) is low. Here R is the belief that our cognitive faculties are reliable, N is naturalism and E is evolutionary theory. My concern is that even if this probability is low, that is irrelevant to the existence of defeaters. For a basic point about conditionalizing is that we should only conditionalize on our total evidence. Often this is captured by some kind of K meant to encapsulate all our background knowledge. So, even if P(R/N&E) is low, P(R/N&E&K) may be higher, and actually end up being high enough to avoid a defeater for R.
If we ignore total evidence, we can come up with easy defeaters for theism T. For example P(T/S) is low, where S is suffering. But of course theists are not forced to conditionalize on S, but can also include other beliefs from their store of background knowledge.
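The total-evidence point can be made concrete with a toy probability model. The sketch below is my own and purely illustrative — the weights are stipulated numbers, not anyone's actual estimates. It shows the structural possibility at issue: a distribution on which P(R|N&E) is low while P(R|N&E&K) is high.

```python
# Toy model: worlds are (R, N&E, K) triples with stipulated weights.
# The weights are invented for illustration only.
worlds = {
    # (R,    N&E,   K):    weight
    (True,  True,  True):  0.18,
    (True,  True,  False): 0.02,
    (False, True,  True):  0.02,
    (False, True,  False): 0.38,
    (True,  False, True):  0.20,
    (False, False, False): 0.20,
}

def prob(pred):
    """Probability of the event picked out by pred."""
    return sum(w for world, w in worlds.items() if pred(world))

def cond(pred, given):
    """Conditional probability P(pred | given)."""
    return prob(lambda w: pred(w) and given(w)) / prob(given)

p_partial = cond(lambda w: w[0], lambda w: w[1])            # P(R | N&E)   = 0.2/0.6  ≈ 0.33
p_total = cond(lambda w: w[0], lambda w: w[1] and w[2])     # P(R | N&E&K) = 0.18/0.2 ≈ 0.9
print(p_partial, p_total)
```

The numbers are of course made up; the point is only structural: a probability that looks low conditional on partial evidence can be high conditional on the total evidence, which is what blocks the move from a low P(R/N&E) to a defeater for R.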
I am not an expert on the discussion of this argument, so maybe someone has made this objection before. Any comments are welcome, especially by those who saw the rest of the session!
Update: there is now an extended description of the session here.
Sunday, February 1, 2009
Weisberg on Models of Cognitive Labor in Science
Anyone who enjoyed the old cellular automata game of Life will have fun with the Java application that Michael Weisberg has made available on his webpage. Here Weisberg gives a small piece of the broader research project that he has been carrying out with Ryan Muldoon concerning how to understand the division of labor in successful scientific communities. As explained here, the model adopts an epistemic landscape approach: regions correspond to research strategies, and different regions have different levels of epistemic significance. Agents then explore these landscapes according to different strategies. A preliminary result is that populations of "mavericks" who deliberately avoid regions explored by others do best. While the significance of this for the original question is not entirely clear, this is certainly an exciting way to investigate the issue!
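The maverick/follower contrast is easy to sketch in code. What follows is my own toy one-dimensional version, not Weisberg and Muldoon's actual model (theirs uses a richer two-dimensional landscape and more refined strategies); it only illustrates why agents who refuse to revisit explored regions can collectively uncover more of the landscape.

```python
import random

random.seed(0)
SIZE = 50
# Epistemic significance of each patch on a 1-D ring-shaped landscape.
landscape = [random.random() for _ in range(SIZE)]

def explore(n_agents, steps, maverick):
    """Run agents over the landscape; return (patches visited, significance found)."""
    visited = set()
    positions = [random.randrange(SIZE) for _ in range(n_agents)]
    for _ in range(steps):
        for i, pos in enumerate(positions):
            visited.add(pos)
            neighbors = [(pos - 1) % SIZE, (pos + 1) % SIZE]
            if maverick:
                # Mavericks refuse patches anyone has explored; jump if stuck.
                fresh = [p for p in neighbors if p not in visited]
                positions[i] = fresh[0] if fresh else random.randrange(SIZE)
            else:
                # Followers greedily climb toward higher significance.
                positions[i] = max(neighbors, key=lambda p: landscape[p])
    return len(visited), sum(landscape[p] for p in visited)

f_cov, f_sig = explore(10, 20, maverick=False)
m_cov, m_sig = explore(10, 20, maverick=True)
print(f_cov, m_cov)  # mavericks cover more of the landscape
```

The follower population converges on a few local peaks and then stalls, while the mavericks' ban on revisiting spreads them across the landscape — a crude analogue of the result described above.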
Tuesday, January 13, 2009
New Book: Grounding Concepts: An Empirical Basis for Arithmetical Knowledge
C. S. Jenkins' relatively new book looks like an exciting contribution to the epistemology of mathematics that aims to relate debates in the philosophy of mathematics to some more recent work on concepts and the a priori. Based on the title and on her earlier paper, I had expected that Jenkins aimed to defend some kind of neo-Millian view of arithmetic, in line with Kitcher. But this seems to have been a mistake. Her preface lays out a clear desire to defend the a priority of arithmetic:
(1) that arithmetical truths are known through an examination of our arithmetical concepts;
(2) that (at least our basic) arithmetical concepts map the arithmetical structure of the independent world;
(3) that this mapping relationship obtains in virtue of the normal functioning of our sensory apparatus. (x)

It is (1) and (3) which might not initially seem to sit well together, so I look forward to seeing how Jenkins can reconcile some kind of a priorism with some kind of empiricism.
Wednesday, October 8, 2008
Intuition of Objects vs. Holism
In a previous post I wondered what the role of intuition of quasi-concrete objects like stroke-inscriptions really was in Parsons' overall epistemology of mathematics. After finally finishing the book, it seems that one clear role, at least, is Parsons' objections to holism of the sort familiar from Quine and championed in more detail for mathematics by Resnik. In the last chapter of the book Parsons makes this point:
Intuition does play a role in making arithmetic evident to the degree that it is, in that there is a ground level of arithmetic, not extending very far, that is intuitively evident. Furthermore, the objects that play the role of numbers in this low-level arithmetic can continue to do so in a more full-blooded arithmetic theory.

After noting that logical notions allow this further extension, he insists that

the role of intuition does not disappear, because it is central to our conception of a domain of objects satisfying the principles of arithmetic ... an intuitive domain witnesses the possibility of the structure of the numbers (336).

Here, then, we have a definite epistemic role for intuition of objects. It helps us to explain what is different about arithmetic, or at least the fragment of arithmetic that is closely related to these intuitions. (In chapter 7, this fragment is said to not even include exponentiation, so it falls far short of PRA.)
While this objection to holism is quite persuasive, Parsons is at pains to emphasize how modest it really is. He offers some additional discussion of the implications for set theory, but the book seems primarily focused on what distinguishes arithmetic from other mathematical theories. It is an impressive achievement that I am sure will frame much of philosophy of mathematics for a long time.
Sunday, August 10, 2008
Mathematics, Proper Function and Concepts
Continuing the previous post, I think what bothers me about the proper function approach is the picture of cognitive faculties as innate. This forces the naturalist to account for their origins in evolutionary terms, and this seems hard to do for the faculties that would be responsible for justified mathematical beliefs. A different approach, which should be compatible with the spirit of the proper function approach, would be to posit only a few minimally useful innate faculties along with the ability to acquire concepts. If we think of concepts as Peacocke and others do, then acquiring a mathematical concept requires tacitly recognizing certain principles involving that concept (an "implicit conception"). Then, if a thinker generated a mathematical belief involving that concept and followed the relevant principles, we could insist that the belief was justified.
This would lead to something like:
Jpfc: S’s belief B is justified iff (i) S does not take B to be defeated, (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’, and (iii) the production of B accords with the implicit conceptions of the concepts making B up.

In certain cases, perhaps involving simple logical inferences, the innate cognitive faculties would themselves encode the relevant implicit conceptions, and so clause (iii) would be redundant. But in more advanced situations, where we think about electrons or groups, clause (iii) would come into play. As far as I can tell, this proposal would allow for justified beliefs in demon worlds. For example, if an agent was in a world where groups had been destroyed (if we can make sense of that metaphysical possibility), her group beliefs could still be justified. In fact, the main objection that I foresee to this proposal is that it makes justification too easy, but presumably that is also an objection that the proper function proposal faces for analogous cases.
Wednesday, July 30, 2008
Mathematics and Proper Function
I just finished reading fellow Purdue philosopher Michael Bergmann’s Justification Without Awareness, and would certainly want to echo Fumerton’s comment that “It is one of the best books in epistemology that I have read over the past couple of decades and it is a must read for anyone seriously interested in the fundamental metaepistemological debates that dominate contemporary epistemology.”
The central positive claim of the book is that the justification of agent’s beliefs should be characterized in terms of proper function rather than reliability:
Jpf: S’s belief B is justified iff (i) S does not take B to be defeated and (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were ‘designed’ (133).

The main advantage of this proper function approach over a simple focus on reliability is that a person in a demon world can still have justified perceptual beliefs. This is because their faculties were ‘designed’ for a normal world and they can still function properly in the demon world where they are not reliable.
Still, it is not clear how the proper function approach marks an improvement when it comes to the justification of mathematical beliefs. This is not Bergmann’s focus, but I take it that a problem here would call into question the account. To see the problem, consider a naturalized approach to proper function that aims to account for the design of the cognitive faculties of humans in terms of evolutionary processes. I can see how this might work for some simple mathematical beliefs, e.g. addition and subtraction of small numbers. But when it comes to the mathematics that we actually take ourselves to know, it is hard to see how a faculty could have evolved whose proper function would be responsible for these mathematical beliefs. The justification of our beliefs in axioms of advanced mathematics does not seem to work the same way as the justification of the mathematical beliefs that might confer some evolutionary advantage. If that’s right, then the proper function account might posit a new faculty for advanced mathematics. But it’s hard to see how such a faculty could have evolved. Another approach would be to side with Quine and insist that our justified mathematical beliefs are justified in virtue of their contribution to our justified scientific theories. The standard objection to this approach is that it does not accord with how mathematicians actually justify their beliefs. More to the point, it is hard to see how the cognitive faculties necessary for the evaluation of these whole theories could have evolved either.
A theist sympathetic to the proper function approach might take the failure of naturalistic proper functions as further support for their theism. But if that’s the case, then the claim that proper function approaches are consistent with some versions of naturalism, defended in section 5.3.1, needs to be further qualified.
Wednesday, July 2, 2008
Intuition of Quasi-Concrete Objects
In his intriguing discussion of our intuition of quasi-concrete objects Parsons focuses on a series of stroke-inscriptions (familiar from Hilbert) and their respective types. For Parsons, a perception or imagining of the inscription is not sufficient for an intuition of the type:
I do not want to say, however, that seeing a stroke-inscription necessarily counts as intuition of the type it instantiates. One has to approach it with the concept of the type; first of all to have the capacity to recognize other situations either as presenting the same type, or a different one, or as not presenting a string of this language at all. But for intuiting a type something more than mere capacity is involved, which, at least in the case of a real inscription, could be described as seeing something as the type (165).

Again, later Parsons says that "intuition of an abstract object requires a certain conceptualization brought to the situation by the subject" (179).
Unfortunately, Parsons says little about what this concept is or what role it plays in the intuition of the type. The risk, to my mind, is that a clarification of the role for this concept might make the perception or imagination of the token irrelevant to the intuition. If, for example, we think of concepts along Peacocke's lines, then possessing the concept of a type is sufficient to think about the type. Some might think that acquiring the concept requires the perception or imagination of the token, but Parsons says nothing to suggest this. I would like to think this has something to do with his view that the token is an intrinsic representation of the type, but the connection is far from clear to me.
(Logic Matters has an extended discussion of the Parsons book as well.)