Jono Henshaw

Chaos Theory as a description of the sociology of mathematics

E.g. any general question that looks a bit like Fermat’s Last Theorem gets sucked into the Fermat’s Last Theorem attractor.

What should we say about the psychological variables?

Suppose for the sake of argument that theorems are the attractors. And suppose that the (chaos) theory turns out not to depend on any psychological variables very much. Then maybe this will make a good argument for (a pragmatist form of) mathematical realism.

A fantastic idea: what are the psychological variables involved in repellors? Good example: non-Euclidean geometry. What sorts of schools should we run to train kids to think towards the repellors? And what about mystical traditions: can they help us go towards repellors?
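The attractor/repellor talk has a concrete mathematical core that may be worth keeping in view. For a one-dimensional discrete system x → f(x), a fixed point is an attractor when |f′| < 1 there and a repellor when |f′| > 1. A minimal sketch (the map f(x) = x², with attractor 0 and repellor 1, is just an illustrative choice):

```python
def iterate(f, x0, n):
    """Iterate a one-dimensional map n times from x0,
    stopping early if the orbit is clearly diverging."""
    x = x0
    for _ in range(n):
        x = f(x)
        if abs(x) > 1e6:  # clearly running away from the repellor
            return x
    return x

f = lambda x: x * x  # fixed points: 0 (attractor, f'(0)=0) and 1 (repellor, f'(1)=2)

print(iterate(f, 0.999, 50))  # pulled into the attractor at 0
print(iterate(f, 1.001, 50))  # pushed away from the repellor at 1
```

Starting arbitrarily close to the repellor still sends you away from it, which is the sense in which repellors are hard to reach.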

Read something by Prigogine? (http://en.wikipedia.org/wiki/Prigogine)

[[blue

Jason - I was writing this as an e-mail but it got long and I thought this might be easier.

Before I left today I was musing on whether a concept like Rawls’s reflective equilibrium could be applied to degrees of belief in axioms and their consequences. The idea is that raw intuitions about axioms would generate mathematical research, which would then uncover formal relationships between the axioms and many more consequences. This information would then feed back and refine the intuitions. This is similar to Rawls’s idea of interaction between general moral principles and specific moral judgments. ]]

{[pink Great idea. http://plato.stanford.edu/entries/reflective-equilibrium/ might be helpful … although the history is a bit odd, because Goodman almost certainly got the idea from Quine, who isn’t mentioned. BTW, Goodman is one of my heroes, but I don’t think he’s good on this topic. But Quine might be. I have the relevant books in my office. ]}

{[blue A worry with Rawls’s concept in the moral sphere is that convergence is not guaranteed - intuitive moral principles may be incompatible with specific judgements to the extent that the system just oscillates, resulting in cognitive dissonance or outright rejection of the process. I don’t think this worry is as acute in the mathematical sphere, as intuition here settles more reasily (lol… guess that’s readily/easily). ]} {[pink good word ]} {[blue However, this obviously isn’t always true (whence the AC joke). ]} {[pink Good. I think you have to use that joke somehow. ]}

[[blue I think perhaps we were coming from two different angles at this point. Tell me if you agree. On the one hand, if we view the above as a diachronic ‘sociology’ of mathematical practice, then there is no problem if intuitions do not settle or are deductively inconsistent. We are simply describing a phenomenon and it does what it does.

On the other hand, if we are thinking about a particular mathematician, then the story looks different. The degree of belief she has in axioms or their consequences may motivate her direction of research, and perhaps even affect her beliefs about the physical world. To me it seems that it would be rational for this mathematician to desire (deductive) consistency in her degrees of belief. ]] {[pink This is very helpful. My idea was that believing in the consistency of her degrees of belief isn’t the same as those degrees of belief being consistent. ]} {[blue There is no sense in her having different degrees of belief in AC and WOP, because these two statements have the same implications regarding her other beliefs and her future actions. ]} {[pink There’s no sense in it. And yet it happens. ]} {[green And it may have good consequences. Or are you sure it doesn’t? ]} {[blue Thus, while her gut feeling may favour one over the other, she would (for instance) fail a Dutch book-style argument by favouring one in her actions. ]} {[pink Right. But you can’t use Dutch books — at least not in the normal way — because you don’t believe in deductive closure. And that’s an important point even aside from my issue about AC, WOP etc. Which I’m mentioning because I think the point I made earlier in this paragraph might be wrong, but no matter what you think about that you should mention the Dutch Book problem. ]}

{[blue The same argument can’t be made in the case of statements that are not known to be logically equivalent. Suppose our mathematician is willing to bet on the 10^100th digit of pi. She is presumably also willing to pay anything up to a dollar for a ticket showing “Worth $1 if pi = 4(1 - 1/3 + 1/5 - …)”. The bookie can then make a Dutch book against our mathematician only if he knows the 10^100th digit of pi. And our mathematician knows he doesn’t! ]} {[pink good ]}
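The Leibniz series on the ticket can be checked numerically, and its painfully slow convergence also underlines why accepting the formula gets nobody anywhere near the 10^100th digit. A minimal sketch (the function name is mine):

```python
import math

def leibniz_partial_sum(n_terms):
    """Partial sum of the Leibniz series: 4 * (1 - 1/3 + 1/5 - ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# The alternating-series error bound is the first omitted term, about 2/n,
# so each extra correct decimal digit of pi costs roughly ten times more terms.
for n in (10, 1_000, 100_000):
    print(n, leibniz_partial_sum(n))
```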

{[blue This might seem a bit trite. Aren’t the Dutch book arguments meant to be a bit less literal than this? ]} {[pink No! They’re meant to be very literal. And lots of people agree that deductive closure is a problem (I can give you a recent example later) … but hardly anybody has anything interesting to say about it, which is why this is such a good topic. ]} [[blue However, I think it’s possible to see any argument of this form as a requirement of consistency among existing beliefs. Thus, a bookie who knew the same things as our mathematician could catch her if, for example, she ignored the law of finite additivity. Of course, if the bookie knows more than she does, he can catch her out no matter how consistent she is. So I’m only extending this assumption of ‘knowledge equality’ to include knowledge of formal logical truths.

This is basically why I think the difference between known and ‘potential’ logical truths matters. Hope that this makes some kind of sense… ]] {[pink Yes, this is all good. ]}
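The finite-additivity case can be made completely literal with toy numbers. If the punter’s credences in A and not-A sum to more than 1, a bookie who knows nothing she doesn’t know simply sells her both tickets at her own prices and profits either way. A sketch (all names and figures are invented for illustration):

```python
def dutch_book_profit(p_A, p_not_A, stake=1.0):
    """Bookie sells the punter two tickets at her own prices:
    'worth $stake if A' for p_A * stake, and
    'worth $stake if not-A' for p_not_A * stake.
    Exactly one ticket pays out, so the bookie's profit is the same
    whichever way A turns out."""
    income = (p_A + p_not_A) * stake
    payout = stake  # exactly one of A, not-A obtains
    return income - payout

# Credences violating finite additivity: P(A) = 0.6, P(not-A) = 0.6.
print(dutch_book_profit(0.6, 0.6))  # bookie is guaranteed ~0.2 per dollar staked
```

With coherent credences (summing to 1) the guaranteed profit vanishes, which is the synchronic half of the standard Dutch book result.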

----

Great example: We’re taught that infinity is naughty (in arithmetic / analysis), but then it keeps turning out that one-point compactifications are useful.

Can you reformulate set theory to separate the truth of AC and Zorn’s Lemma? Other examples along these lines?
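For reference (assuming the standard presentation), ZF already proves the three principles pairwise equivalent, which is what makes “separating” the truth of AC and Zorn’s Lemma non-trivial: any reformulation would have to weaken the background theory.

```latex
% Over ZF the following are provably equivalent:
\mathrm{ZF} \vdash \mathrm{AC} \leftrightarrow \mathrm{WOP} \leftrightarrow \mathrm{Zorn}
% AC:   every family of nonempty sets admits a choice function
% WOP:  every set can be well-ordered
% Zorn: every poset in which every chain has an upper bound
%       has a maximal element
```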

Dutch books: obviously if the bookie knows more than the punter, he can win. If the bookie uses deductive relations which the bettor is unaware of, the bookie will win. What’s going on? “Knowing more” isn’t usually taken to include deductive relations, but it should be.
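The point that “knowing more” should include deductive relations can be sketched with toy numbers. If the punter prices bets on two statements she doesn’t know to be equivalent, a bookie who does know the equivalence sells her the dearer ticket and buys the cheaper one; the payouts cancel and the price gap is pure profit. (All credence values here are invented for illustration.)

```python
def exploit_equivalence(p_A, p_B, stake=1.0):
    """Punter prices 'worth $stake if A' at p_A * stake and
    'worth $stake if B' at p_B * stake, unaware that A and B are
    logically equivalent. The bookie, who knows the equivalence,
    sells her the dearer ticket and buys the cheaper one. The two
    tickets pay out together or not at all, so the payouts cancel
    and the bookie keeps the price difference whatever happens."""
    return abs(p_A - p_B) * stake

# e.g. credence 0.9 in AC but only 0.7 in well-ordering, even though
# (unbeknownst to the punter) the two stand or fall together:
print(exploit_equivalence(0.9, 0.7))
```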

So a mathematician should have two aims: (1) inductive consistency, AND (2) learn as many deductive relations as possible [but only interesting ones].

Could do some classification of different uses of degrees of belief. Each of these will work in a different way. Some of them will be normative: e.g. a mathematician proves something which contradicts a well-known simple result, but without realising this. Then she SHOULD have realised that her proof was wrong.

The gut feeling definition is important, but there’s no point in imposing logical constraints on it.

The community definition is also important.

Also reflective equilibrium.

Start the essay with the analogy to physics.