In Defense of Probability (1985) [pdf]
ijcai.org

If you are interested in a deeper dive into the topics covered in this paper, I would suggest "Probability Theory: The Logic of Science" by E. T. Jaynes, available at http://www-biba.inrialpes.fr/Jaynes/cpreambl.pdf
It is probably (ha!) the most enlightening math book I have ever read.
A more balanced, current treatment is in Bradley Efron and Trevor Hastie's Computer Age Statistical Inference (2016) [1].
It's a very good book. Just be aware that it is very one-sided in favor of the Bayesian view.
That's what makes it a very good book!
There's a higher-quality PDF available [here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13....); it has a credit footnote: "Converted to electronic version by: Roby Joehanes, Kansas State University."
That person already did the OCR cleaning (and the fonts are nicer).
It's ironic: today we could really use a paper "In Defense of Fuzzy Logic".
Having used a fuzzy logic model to do some analysis on a social project in the '90s, and on a heating problem around the same time, I don't think fuzzy logic needs a defense; it needs a PR agency.
What is the difference between fuzzy logic and a Bayesian interpretation of probability?
Bayesian probability addresses uncertainty about whether an event will or did occur, but treats the actual occurrence of the event as a boolean value in {0, 1}.
Fuzzy logic recognizes degrees of occurrence, represented by any number between 0 and 1. Its proponents contend that this allows richer abstraction of the ways that humans actually perceive and reason about the world. In their view, probability is just a special case of fuzzy logic, using artificially restricted truth values.
If you're interested, check out Kosko's "Fuzziness vs. probability." International Journal of General System 17.2-3 (1990): 211-240 http://sipi.usc.edu/~kosko/Fuzziness_Vs_Probability.pdf
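The distinction described above can be made concrete with a small sketch (an outside addition, not from the thread; the function names are my own). Probability assigns a degree of belief to a crisp true/false event, while standard (Zadeh-style) fuzzy logic treats the truth value itself as a degree in [0, 1] and combines values with min/max rather than the product/sum rules of probability:

```python
# Probabilistic view: uncertainty about a crisp event. P(covered) = 0.3,
# but once observed, "covered" is either True or False.
p_covered = 0.3

# Fuzzy view: the region *is* covered to degree 0.3 -- no uncertainty,
# just partial truth.
covered_degree = 0.3

# Standard Zadeh fuzzy connectives use min/max/complement:
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# One visible consequence: the "law of excluded middle" can fail.
# For a probabilistic event, P(A or not-A) = 1 always; in fuzzy logic,
# A OR (NOT A) need not reach 1.
a = 0.3
print(fuzzy_or(a, fuzzy_not(a)))  # prints approximately 0.7, not 1.0
```

Kosko's paper linked above develops this contrast (and the claim that probability is a special case) in much more detail.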
I see, that makes sense.
"This region is 30% covered" vs. "there is a 30% chance this region is (read: fully) covered."
I remember hearing this is the correct way to interpret some weather forecasts: "90% rain" could mean certain rain over 90% of the area, rather than a 9-in-10 chance of rain over the whole area.
This is correct, or at least it was as of 25 years ago, when the question came up in conversation with a professor of mine; I asked on Usenet and got a response from someone who worked for the National Weather Service.
Apparently they had specifically debated whether to weight it by population density. In other words, should they try to make it so that a 90% chance of rain means that 90% of the people listening to the forecast encounter rain? They decided not to, on the principle that people do not care equally about forecasts. Farmers care more than city dwellers, and city dwellers who care about forecasts pay more attention when they are deciding whether to take a trip to the country. Sorting out all of these issues to produce a perfect forecast for the audience was a can of worms they decided not to open; instead they kept to the simple, unambiguous, and easily measured percentage of land area.
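For what it's worth, the commonly cited NWS formula reconciles the two readings above: the reported probability of precipitation is confidence times areal coverage, PoP = C × A. A minimal sketch (the function name is my own):

```python
# PoP = C * A, where C is the forecaster's confidence that precipitation
# will occur somewhere in the forecast area, and A is the fraction of the
# area expected to receive it.
def pop(confidence, area_fraction):
    return confidence * area_fraction

# Both of these yield a reported "90% chance of rain":
print(pop(1.0, 0.9))  # certain rain over 90% of the area
print(pop(0.9, 1.0))  # 90% chance of rain over the whole area
```

So "90% of the area" and "9-in-10 chance everywhere" are indistinguishable in the single reported number.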
I could add that truth-likeness might not be adequately accommodated by a subjective-probability degree-of-belief model. For example, while I might believe both of these propositions to be true, p=1, one is more, uh, truth-like:

"Donald Trump weighs 107 kg." "Donald Trump weighs less than the sun."
I suspect that the author's assertion that the degree-of-belief model is sufficient to account for vagueness in propositions cannot really be defended. It's also obvious the author is not a logician.
Why is there a glaring typo in the first sentence of the abstract?
"In this paper, it is argued that probability theory, when used correctly, is suffrcient for the task of reasoning under uncertainty."
The paper is probably OCRed.
Good point. The rendering of "sufficient" probably uses the triple-ligature for "ffi", where all 3 characters are merged.
The author, Peter Cheeseman, is a reputable figure and a solid practitioner. I'm sure he's just as annoyed that, on the second page, his own name is misspelled as "Cheesemart" ;-).
People may recognize him as a co-creator of the AutoClass software for Bayesian clustering, which was very popular in the late 1990s; the explanatory paper has 1700 citations.
We can fix it. I have started a git repo for this: https://github.com/michaelchristophernewyork/probability. Please help clean up if you have time.
Author's name is also misspelled in the header on one of the early pages. It's obviously been scanned and OCRed.
OCR has come a long way, but it still has plenty of room for improvement.