Tal Bachman wrote:
Can we please clarify exactly what a probability is? This will help as far as assessing its connection to rationality goes. Do you hold to a species of frequentism, or do you go for a Bayesian interpretation? Push comes to shove when we consider an infinite number of possible outcomes. The classical N(A)/N definition of probability works well for situations with only a finite number of equally likely outcomes. Even there, the issue of whether the set of outcomes really is equally likely is a problem when we wish to provide a foundation for probability theory. In mathematical probability theory one assumes that there is a so-called measure space (of total measure one) and a so-called measure. From that point it's all formal mathematics: complexity, but no philosophical worries.
---Not sure if I can satisfy you here; all I mean by "probability" here is what Popper himself means - "likelihood" - when he declares that any estimate of the "probability", or likelihood, that a proposition is true has no rational basis. Popper does try to get round this in a variety of ways, but I'm hoping I don't need to dive into that whole mess. In a nutshell, Popper tries to construct a rational basis for "preferring" one theory to another which does not rely on some estimate - which would necessarily have to arise via inductive inference - of the likelihood of that theory actually being true.
OK, we will skip the whole issue of the nature of probability but I think there are monsters lurking there.
About the connection between an estimation of a proposition's probable truth and rationality, I'm not sure what to say other than that if we did not have a refined probability-gauging talent, we would all have perished long ago.
We do have a probability-gauging talent, but it is instructive to note that there is evidence that what we are doing is something different from internally calculating within a consistent probability calculus. Rather, we seem to be employing a set of heuristics that (imperfectly) tend to direct us toward likelihood judgements (or at least intuitions about what to do or believe).
Consider the following example (if you have heard it already it will be less dramatic).
You are on a game show and there are three doors. The host tells you that behind one of the doors there is a new car and behind the other two doors are goats. You choose a door, say number 2, and then, before revealing what is behind that door, the host opens one of the other doors to reveal a goat. He then asks if you would like to change your choice to the remaining closed door. What should you do?
Most people assert that it makes no difference if you switch doors at this point. You don't know which of the two doors that remain closed hides the car, so aren't the chances 50/50?
But the fact is that you should switch. If you haven't heard this before and wonder how that could be right let me know and I will convince you.
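In fact, you don't even need the argument to see it; you can just count. Here is a quick Monte Carlo sketch (my own illustration, not from the original discussion) that plays the game many times under each strategy:

```python
import random

def monty_hall_trial(switch):
    """Play one round; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

print(f"stay:   {win_rate(False):.3f}")   # about 0.333
print(f"switch: {win_rate(True):.3f}")    # about 0.667
```

Switching wins exactly when your first pick was wrong, which happens 2/3 of the time; the simulation converges on that number.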
Our brains make estimates of probabilities constantly. Even walking to the front door involves numerous such estimates based on what our brains have logged - that is, based precisely on estimates about the unobserved from the already observed. Those inferences are not infallible, of course; but fortunately, each prediction we make, even unconsciously, which does not come true is accounted for and, in healthy minds, serves to improve the calibration of our cognitive prediction mechanisms. That is what the best research indicates, anyway.
I roughly agree, but it remains possible that we aren't doing what we think we are doing. In my toy car example some naïve creature may think that the car was calculating probabilities and rationally inferring that it must slow down if it doesn't want to keep crashing. Now suppose that I build into the car a much more complicated intelligent module whose purpose is to explain in English why it (the car) is doing this or that. The explanation it produces (for others and for itself) could be that it is calculating probabilities and rationally inferring that it must slow down.
It is a fact that (for normal people) our belief in a subsequent six increases as the number of observed sixes increases. Evidently, evolution has filtered out those who don't think (or react) that way.
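For what it's worth, that growth of belief can be given a precise Bayesian form - one model among many, offered here only as an illustration. Treat the die's bias toward six as an unknown with a uniform prior; then Laplace's rule of succession says that after seeing n sixes in n tosses, the predicted probability of another six is (n+1)/(n+2):

```python
from fractions import Fraction

def prob_next_six(sixes_seen, tosses):
    """Laplace's rule of succession under a uniform prior on the bias:
    P(next toss is a six) = (sixes_seen + 1) / (tosses + 2)."""
    return Fraction(sixes_seen + 1, tosses + 2)

# Confidence climbs toward 1 as the run of sixes lengthens.
for n in (0, 1, 10, 100, 10_000):
    print(n, float(prob_next_six(n, n)))
```

Whether our brains do anything like this calculation, as opposed to running a cruder heuristic with the same upward-sloping behaviour, is exactly the question at issue.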
But wherein lies the rationality? I am tempted to be a jerk for the sake of argument and ask how can we assess that until we have a working definition of rationality.
Our brains comprise around 2% of our body weight, but they consume about 20% of our energy. Given the burden that that energy consumption puts on us, what do you think the chances are that hundreds of thousands of years of evolution would select for brains which make simple inferences that have no consistent relationship to probabilities as they actually exist in nature?
The point is to dare to question whether induction in and of itself, abstracted away from our theoretical understandings and deductive deliberations, is truly rational (but we need to clarify the meaning of rationality again).
---But where would "theoretical understandings" about the world even come from, if not from induction itself?
Well, let's see if we can think of some possibilities just for the sake of argument.
When we see faces in the clouds or notice symmetries or other patterns in nature we may be applying a talent for noticing structure. An extension of this talent may be that we are constantly trying to organize the world. We take these patterned conjectures together with what we have been taught and unconsciously explore the deductive implications. For example, is the idea that everything is made of indivisible "atoms" something that comes by induction? Certainly not at first. We may feel that the logical implications are good and that the idea explains a lot long before we apply induction. Thinking about the implications of the symmetry of a die is an example.
Then there are also things like assuming that we have faculties that are nonphysical in nature, such as an ability to dimly but directly see truth (intuit the Platonic realm which structures material reality). But I won't pursue this because it goes against my physicalist assumptions.
In the case of the die toss I feel like most of the rationality rested in deductive inferences based on our working assumptions about how dice work, on the physics, etc.
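To make the contrast concrete, here is a small sketch of my own (not anything from the original exchange): the deductive route gets P(six) = 1/6 straight from the symmetry assumption, with no observations at all, while the inductive route has to earn the same number by counting rolls.

```python
import random

# Deductive route: from the working assumption that the die is symmetric,
# all six faces must carry equal probability, so P(six) = 1/6. No data needed.
p_deduced = 1 / 6

# Inductive route: estimate the same quantity from observed frequencies.
rolls = [random.randint(1, 6) for _ in range(200_000)]
p_observed = rolls.count(6) / len(rolls)

print(p_deduced, p_observed)  # the two numbers should roughly agree
```

The interesting point is that the first number rests entirely on a symmetry premise plus deduction; induction only enters if we want to check that premise against the world.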
---Even if that is so, how could pure deduction account for "our working assumptions about how dice work"? See what I'm saying? How can there be any account of cognition, and of our understanding of the world, which does not acknowledge a role, and even a very important role, for inductive reasoning?
Well, personally, I do acknowledge a role, but I am trying to explore possibilities that would make Popper's extreme conclusions less insane. Furthermore, even though I acknowledge it, I do not think it is the sole engine of rationality, nor is it the sole or even main source of scientific progress.
Suppose we've just emerged from a dark cave, which we've lived in forever, and we've never seen a die. We emerge, and a die is produced with six sides. For whatever reason, we guess that it will only ever turn up a six when tossed. It is then tossed ten thousand times, and every time it turns up a six. Our hypothesis might not have been the result of an inductive inference, but it will be, I daresay, not possible for us to avoid having increasing confidence that our guess is something like a true statement about this one little aspect of the world. To what sort of reasoning, if not inductive, would you attribute our increasing confidence?
Hmmm. Suppose that nature simply programmed into us, in a direct and brute-force way, the following instruction:
"if something happens repeatedly, then experience confidence that it will happen again"
Does that mindlessly simple, thermostat-like subroutine count as reasoning? Sounds more like the car example to me.
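To see how little machinery the instruction needs, here is a deliberately dumb sketch of it (a caricature of my own, of course - the gain parameter is arbitrary): each repetition just nudges a "confidence" number toward 1, with no probability calculus represented anywhere.

```python
def confidence_after(repetitions, gain=0.5):
    """Brute-force rule: each repetition of an event nudges 'confidence'
    a fixed fraction of the way toward 1. No probabilities, no inference."""
    confidence = 0.0
    for _ in range(repetitions):
        confidence += gain * (1.0 - confidence)
    return confidence

# Confidence climbs with repetition, exactly as the instruction demands,
# without the system calculating or even representing a probability.
for n in (1, 3, 10):
    print(n, round(confidence_after(n), 4))
```

The outputs rise monotonically with n, so from the outside the system looks like a Bayesian updater; on the inside there is nothing but a counter and a multiply.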
Well, this is an interesting question. What do we actually mean by "inductively infer"?
---Dude, you ought to write for FARMS or something! It's like "this depends on what we mean by terms like 'Native Americans', 'descendants', 'Israelites', 'ancestry'"...
Hey! Come on now. I think you would (or should) agree that in the context of philosophy, asking for clarification on the meaning of words is nothing like trying to use semantics to obfuscate or twist ordinary meanings about Native Americans, etc. We are face to face with logic and metalogic, and here we actually do need to question the precise meanings of our words.
Must the biological or mechanical system internally apply GOFAI symbolic manipulations and instantiate actual logical maneuvers? Does a dog actually make probability calculations and draw inferences that obey a consistent probability calculus?
---I think so, yeah. Why wouldn't it? How else would it survive? How else could it track, hunt, survive in a pack?
This is like Penrose asking how an algorithmic mind could come to see that something is true unless by internally instantiating a symbolic proof. Dennett's answer to this is instructive and might be a spiritual template for an answer to your question.
The whole thing is here:
http://ase.tufts.edu/cogstud/papers/penrose.htm
A snippet:
The argument Penrose unfolds has more facets than my summary can report, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator--that the argument could be "refuted" by any simple objection. So I am reluctant to credit my observation that Penrose seems to make a fairly elementary error right at the beginning, and at any rate fails to notice or rebut what seems to me to be an obvious objection. Recall that the burden of the first part of the book is to establish that minds are not "algorithmic"--that there is something special that minds can do that cannot be done by any algorithm (i.e., computer program in the standard, Turing-machine sense). What minds can do, Penrose claims, is see or judge that certain mathematical propositions are true by "insight" rather than mechanical proof. And Penrose then goes to some length to argue that there could be no algorithm, or at any rate no practical algorithm, for insight.
But this ignores a possibility--an independently plausible possibility--that can be made obvious by a parallel argument. Chess is a finite game (since there are rules for terminating go-nowhere games as draws), so in principle there is an algorithm for either checkmate or a draw, one that follows the brute force procedure of tracing out the immense but finite decision tree for all possible games. This is surely not a practical algorithm, since the tree's branches outnumber the atoms in the universe. Probably there is no practical algorithm for checkmate. And yet programs--algorithms--that achieve checkmate with very impressive reliability in very short periods of time are abundant. The best of them will achieve checkmate almost always against almost any opponent, and the "almost" is sinking fast. You could safely bet your life, for instance, that the best of these programs would always beat me. But still there is no logical guarantee that the program will achieve checkmate, for it is not an algorithm for checkmate, but only an algorithm for playing legal chess--one of the many varieties of legal chess that does well in the most demanding environments. The following argument, then, is simply fallacious:
(1) X is superbly capable of achieving checkmate.
(2) There is no (practical) algorithm guaranteed to achieve checkmate.
therefore
(3) X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical truth, and even if there is no algorithm, practical or otherwise, for recognizing mathematical truth, it does not follow that the power of mathematicians to recognize mathematical truth is not entirely explicable in terms of their brains executing an algorithm. Not an algorithm for intuiting mathematical truth--we can suppose that Penrose has proved that there could be no such thing. What would the algorithm be for, then? Most plausibly it would be an algorithm--one of very many--for trying to stay alive, an algorithm that, by an extraordinarily convoluted and indirect generation of byproducts, "happened" to be a superb (but not foolproof) recognizer of friends, enemies, food, shelter, harbingers of spring, good arguments--and mathematical truths!
More later.