I. A (Mostly) Final Word on Hume.
In Chapter 4, I crammed in some intellectual history relevant to our endeavor: a brief introduction to Laplace’s works and his use of probability theory and induction, and a closer look at David Hume’s philosophical “problem” with the uncertainty that inheres in all non-demonstrative reasoning. Let me be blunt: “Hume’s problem” is a philosophical “problem” only because it denies both common sense and the obvious scientific and technological progress that has attended humanity’s steady march from the spear, to the bow-and-arrow, to the trebuchet, and eventually to satellites and supercomputers. Yet philosophers and academics are much troubled by, and also secretly love, “Hume’s Problem,” precisely because it contains a seemingly insoluble puzzle. In fact, the more I read serious academics masticating on it like a dog on a titanium bone, the more I am reminded of an old George Carlin bit from his 1970s album “Occupation: Foole,” in which he relates how, as a teenage boy attending Catholic school, he and his friends would “save up their best questions for Father Murphy’s class.” To wit:
“Fawtha, if God is all powerful, can he create a rock so heavy that he himself cannot lift it?!” Followed by raucous Irish laughter and “Got ya theah, Fawtha!”1
When a person says something like this…
The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another [Treatise, 1.4.7.8].
…you know they’re (a) having a crisis, and (b) almost certainly an academic. No one with a functioning cerebral cortex could possibly have lived past childhood while genuinely believing this.2 Learning to walk is very much an inductive endeavor; it just happens sub rosa, without any kind of internal dialogue to accompany it… as opposed to, say, learning philosophy at university.
You find out very early in life exactly how relevant and conditional all propositions are, and that they carry differing certitudes - i.e., probabilities - upon which you act routinely. Is it more or less likely that you will break something in your body if you jump from an apple tree? The immediate question is, “How high is the branch?” That’s an inductive answer all the way around, yet no one seriously questions its reliability. You didn’t need Newton to know, with sufficient certainty, that there is a remorseless, indifferent force that will take you from that tree to that ground regardless of what you might postulate about it. The same is true of checking the sky to decide whether to bring an umbrella or wear a jacket. That is purely inductive, yet Hume, taken seriously, tells us that we can glean nothing at all about the weather, or about falling from trees, from all of our prior encounters - no rational inference at all from the observed to the unobserved. In Hume’s original formulation, he considered this mere “custom.”
Question: If David Hume went to a barber and got a bad haircut, did he continue to go back regardless? Likewise, if Hume went to a restaurant with bad service and ate terrible food that made him sick, would his philosophy require him to return ad infinitum because - according to him - one can draw no inferences whatsoever from that single event to future similar events? (See what I mean now?)

And for those who think I might be pleading a special case by using examples of physical “laws” or “facts,” let me offer a non-physical example that can be reduced to an “If P, then Q” formulation.
“If I ask Dad and he says no, how likely is that no to hold even in the face of additional evidence I provide?” vs. “If Mom says no, how likely is that no to hold even in the face of additional evidence (as in, my older sister’s earnest support)?” I promise you that before I, or you, or anyone, could even think in terms of “reasoning” and “rationale,” we had a very refined sense of the probabilistic outcomes of those two propositions - not as certain as our beliefs about falling or gas station sushi, for sure - but any child who has ever wanted anything knows this, too, even if they don’t have the language for it.
Now this might seem like quibbling, or that I’m just an arse (which is true regardless), so to put the best face on Hume’s argument, I point readers to Colin Howson’s comprehensive treatment, “Hume’s Problem: Induction and the Justification of Belief” (Oxford University Press, 2003). Howson goes to great lengths to defend Hume’s argument, crediting its durability to its strength, even while acknowledging its problems with science, with progress, and with the inductive and circularity arguments against it.3 At the end of this opening analysis, though, Howson points to what I consider perhaps a reconciliation of our views. Hume’s answer to his critics (of whom he was very well aware in his lifetime) was, according to Howson, “to explain where he could not justify, in this case in the apparent universal psychological propensity to induce, and to explain it in terms of inborn characteristics. …He called it, with remarkable prescience, Instinct[.]” Howson, p. 20 (emphasis in original).
I said almost exactly the same thing in the paragraph above, by way of example, in defense of induction. How does one resolve this apparent contradiction, wherein I find a defense of induction in the very same fact/observation that Hume believes shows it has no possible justification? The answer lies in subsequent developments in mathematics: specifically, George Polya’s “Mathematics and Plausible Reasoning,” upon which I am relying for much of this course material, along with R.T. Cox’s work that produced the eponymous “Cox’s Theorem” (1946). E.T. Jaynes, who saw the mathematical significance of Polya’s two brilliant qualitative volumes in light of Cox’s consistency theorems, spells this out in his “Probability Theory: The Logic of Science” (PT: TLOS, hereafter), which is itself a work on par with Newton’s Principia, and in the same intellectual family tree as Laplace’s “Celestial Mechanics.”
The present form of this work is the result of an evolutionary growth over many years… The actual writing started as notes for a series of lectures given at Stanford University in 1956, expounding the then new and exciting work of George Polya on ‘Mathematics and Plausible Reasoning’. He dissected our intuitive ‘common sense’ into a set of elementary qualitative desiderata and showed that mathematicians had been using them all along to guide the early stages of discovery, which necessarily precede the finding of a rigorous proof…
Polya demonstrated this qualitative agreement in such complete, exhaustive detail as to suggest that there must be more to it. Fortunately, the consistency theorems of R.T. Cox were enough to clinch matters; when one added Polya’s qualitative conditions to them the result was a proof that, if degrees of plausibility are represented by real numbers, then there is a uniquely determined set of quantitative rules for conducting inference.
PT: TLOS, E.T. Jaynes, Preface pp. xix-xx.
What Cox’s work shows is that deductive, or demonstrative, reasoning is itself a special case of induction: the case where the conclusions flow from the premises with certainty, at the endpoints of [0, 1], when we have only zeroes and ones. Everything else in life, at least by probability ranking, falls on the number line between the 0 and the 1.4
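This collapse of deduction into the endpoints of probability can be sketched in a few lines of code. What follows is my own illustrative toy, not Cox’s or Jaynes’s formal derivation: a bare Bayes’ rule update in Python, showing that when a hypothesis makes the observed evidence impossible, its posterior probability lands exactly on 0 - classical modus tollens recovered at the endpoint - while likelihoods strictly between 0 and 1 yield only graded plausibility.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: the probability of hypothesis H after observing evidence E.

    prior           -- P(H) before seeing the evidence
    p_e_given_h     -- P(E | H), the likelihood if H is true
    p_e_given_not_h -- P(E | not-H), the likelihood if H is false
    """
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Deductive limit: suppose H guarantees E, and we then observe not-E.
# The evidence we saw has probability 0 under H, so the posterior is
# exactly 0 - modus tollens, living at the endpoint of [0, 1].
print(posterior(prior=0.5, p_e_given_h=0.0, p_e_given_not_h=0.7))  # 0.0

# Everything else in life: likelihoods strictly between 0 and 1
# produce graded plausibility, not certainty.
print(posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4))
```

The second call lands at 2/3 - more plausible than before, but nowhere near certain - which is exactly the territory between the endpoints where, per Cox, a uniquely determined set of quantitative rules governs the inference.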
P. S. Laplace, George Polya, Sir Harold Jeffreys, R.T. Cox, and the great Edwin T. Jaynes comprise the brightest stars among a pantheon of scientist-philosophers who have all pointed out - in different ways in their respective works - that demonstrative (deductive) reasoning only exists for propositions and matters (“facts”) that have finally become, over time… well, demonstrable. The corollary is that all of human knowledge - even the deductive/demonstrative kind that Hume counts as the only thing we can know with certainty - is and was arrived at inductively in the first instance. Prof. Polya repeatedly uses examples from mathematics to explicate this, drawing on some of Euler’s greatest works, as well as subsequent conjectures that still haven’t been confirmed.5
In later chapters, we will look more closely at some of the ways plausible reasoning helps us - lawyers - maneuver propositions and ideas from being merely speculative to eventually being incontrovertible. Understand (please) that moving propositions from “speculative” to “plausible” and thence to certain “enough” to win a verdict is exactly what the law is about - it is the very heart and soul of what a trial is for, and what a good trial lawyer must know how to do. Any belief in the right to a trial by jury necessarily implies that there are underlying, universal modes of reasoning - along with relevant local and even larger cultural context - that make a “jury of peers,” to whom a person can “try their cause,” essential to preserving that person’s right to defend themselves against possible wrongful prosecution. For now, however, we’ll (kinda) leave Hume and his problem with induction so we can finish our intellectual history to the present.
II. Popper, Fisher, and Scientific Irrationalism: David Stove’s Ghost
David Hume might be an interesting historical footnote if it weren’t for two figures (primarily)6 in the early 1900s who not only breathed new life into inductive skepticism, but gave it a new Jazz Age twist: philosopher Karl Popper and statistician Ronald Fisher. I’m not saying that about Hume to be flippant - as a matter of history, Laplace, Lagrange, Euler, Leibniz, and many, many other scientists had continued to succeed rather spectacularly via the inductive route all through the 18th and 19th centuries, and into the beginning of the 20th, notwithstanding the objections of philosophes like Hume.
It was Sir Karl Popper’s dominance over his fellow academics, however, along with the concurrent rise of Sir Ronald Fisher in the areas of statistics and eugenics, that ultimately resurrected, and reified, “Hume’s Problem” within academic science and the philosophy of science. Popper’s conception of science as nothing more than the “falsification” of propositions - his deductivist need for certainty or nothing - found in Ronald Fisher’s p-values a way to deny meaning to the very thing that constitutes validation for scientists: a hypothesis’ predictive power, given in probabilistic terms. The Australian David Stove has written the definitive work on the rise of Karl Popper’s philosophy of science and Popper’s intellectual heirs, Lakatos, Feyerabend, and Thomas Kuhn - whose “The Structure of Scientific Revolutions” is, unfortunately, the most cited book of the entire 20th century.7 In “Scientific Irrationalism: Origins of a Postmodern Cult,” BSB 2024 (kinda gives a lot away in the title, eh?), Stove takes a rhetorical hatchet to the men he dubs the “Irrationalists,” and while he’s at it, just for good measure, he provides a symbolic-logic-style demonstrative proof refuting Hume’s inductive skepticism. And if the book is too much for you (it’s not an easy or quick read), then Keith Windschuttle’s 25-page Foreword lays it all out in advance: how humanities majors, statisticians, and philosophers, through university “Science Studies” departments, came to dominate the entire discussion, and even the mathematics, around the philosophy of science, without actual scientists being consulted (of course).8
Popper was not the only one, however. He was aided in no small part by the noted (and avid) eugenicist and English mathematician Sir Ronald Fisher. Now, if this seems like my trying to unfairly smear the man’s name and/or theories by associating him with the eugenics movement of the 1920s and 1930s, I have two points in defense. First, I point to Aubrey Clayton’s answer to that charge in his book “Bernoulli’s Fallacy,” which is basically Clayton’s name for “Hume’s Problem” as it manifested in mathematics. In describing Fisher’s and Pearson’s ties to the eugenics movement, Clayton notes:
Pearson and Fisher were also devotees to the cause of eugenics and used their newly minted statistical tools with great success to support the eugenics agenda… the proper function of statistics was often to detect significant differences between the races… “Objective” assessments of this kind were used to support discriminatory immigration policies, forced sterilization laws, and, in their natural logical extension, the murder of millions in the Holocaust…
… It may seem that… I am unfairly judging people who lived over a century ago by modern standards… [but] that’s not my goal… [M]y intention is not to dismiss the works of these statisticians simply because they were also eugenicists…
The main reason this matters when it comes to Galton, Pearson, and Fisher is that their heinous attitudes were not at all incidental to their work… eugenicist ideas animated their entire intellectual projects[.]9
Mr. Clayton then provides a complete book-length history and refutation of what he calls “Bernoulli’s Fallacy,” which encompasses the same intellectual strain of deductivism - “objectivity” was their own word - in mathematics that Hume had made his name on almost two centuries earlier (N.B. Bernoulli’s Ars Conjectandi, published posthumously in 1713, presented some of the same arguments). This strain of thought in mathematics also goes by the name frequentist statistics, in large part because of Continental high society’s fascination with so-called “games of chance” - cards, dice, and other similarly inane ways of losing your money. We will discuss the philosophical underpinnings of these games of “chance” and “random” experiments, and how these notions infect clear thinking about both ordinary events and science, in the next chapter.
Statisticians will undoubtedly be familiar with the rivalry between Fisherian and Neyman-Pearson statistics - the dueling approaches to significance testing and parameter estimation advanced by Ronald Fisher on one side and Jerzy Neyman and Egon Pearson on the other. Fisher’s public attacks on other scientists and mathematicians with different theories of statistical interpretation are, ahem, rather well documented in the literature.
R.A. Fisher’s Statistical Methods for Research Workers (1925) was the most influential of these [statistical] cookbooks. In going through 13 editions in the period 1925-1960 it acquired such an authority over scientific practice that researchers in some fields such as medical testing found it impossible to get their work published if they failed to follow Fisher’s recipes to the letter.
Fisher’s later dominance of the field derives less from his technical work than from his flamboyant personal style and the worldly power that went with his official position, in charge of the work and destinies of many students and subordinates. For 14 years (1919-1933) he was at the Rothamsted agricultural research facility with an increasing number of assistants and visiting students, then holder of the Chair of Eugenics at University College, London, and finally in 1943 Balfour Professor of Genetics at Cambridge.
…they adopted a militant attitude, each defending his own little bailiwick against intrusion and opposing every attempt to find the missing unifying principles of inference. R. A. Fisher (1956) and M. G. Kendall (1963) attacked Neyman and Wald for seeking unifying principles in decision theory. R.A. Fisher (in numerous articles, e.g. 1933)… accused Laplace and Jeffreys of committing metaphysical nonsense for thinking that probability theory was an extension of logic[.]
Jaynes, PT: TLOS, pp. 492-94.
Aubrey Clayton asserts that Fisher’s dismissiveness toward Laplace’s views of probability was a product of his time as an undergraduate at Gonville and Caius College, where John Venn (yes, he did more than just diagrams!) was the President. See Bernoulli’s Fallacy, pp. 29-31, Notes 11-12. Venn was openly dismissive of Laplace’s views of probability in his major work. See John Venn, The Logic of Chance, pp. 95-96.10 No one seems to disagree that Fisher publicly insulted contemporaries and attacked them rather than raising substantive objections to their work.
Which brings me to my second “defense” - if it can be called that - regarding my use of the eugenicist label on Ronald Fisher. This one is entirely my own and should not in any way be attributed to any of the eminent people I have cited or quoted. But since I lack any pretense at eminence, here goes: Ronald Fisher was a giant Asshole, with a capital “A” for his varsity letter. This may seem unnecessarily coarse or irrelevant, but let me offer that this is a recurring theme in bad science and should not be ignored because of its impolity: a lot of bad science and bad ideas get accepted by bluster and force of personality rather than substance. E.T. Jaynes points this out remorselessly regarding the treatment of Sir Harold Jeffreys, to whom his work is dedicated; he describes Jeffreys, in some ways, as a casualty of a kind of accepted professional malfeasance by academics who achieved fame through media and acclaim rather than on merit.
III. Practical Application.
Perhaps none of the above will have convinced you, the reader, of anything at all, one way or the other. If so - fair enough. For lawyers, there could even be a certain “so what” about academic backstabbing over the philosophy of science… except for how it manifests in the law. Some of the worst outrages of nonsense science, scientific fraud and misconduct, and scientific ignorance have occurred in courtrooms at the expense of innocent people. I’ve already mentioned Sally Clark’s wrongful conviction for her two sons’ deaths, which rested on this exact misunderstanding, but that pales compared to the Salem Witch Trials (1692-93), in which courts that allowed “spectral evidence” convicted some 30 innocent people of being in league with Satan. That kind of “evidence” justified execution by hanging in 19 cases; in one more, Giles Corey was crushed to death with rocks - i.e., slowly suffocated over the course of three days - for his refusal to enter a plea and/or admit under torture that he was in league with the devil. According to Wikipedia:
Because Corey refused to enter a plea, his estate passed on to his sons instead of being seized by the Massachusetts colonial government.11
Five more people also died in the jails awaiting trial.
And this doesn’t even begin to address misconduct at state and federal forensic laboratories - misconduct upon which thousands of drug convictions were based - 38,000 or so, to be mostly precise. And that was just in Massachusetts, plus the 1,000 or so in Colorado. What’s a few thousand innocent people’s lives in service to science? Then there’s this:
U.S. Navy sailor Keith Allen Harward, 60, who spent 33 years in prison after his conviction for the murder of a Virginia man and the rape of the man’s wife a few blocks from the Newport News naval shipyard in 1982. Key to his conviction was the testimony of so-called forensic bite mark experts. Lowell J. Levine told a jury that there was “a very, very, very high degree of probability” that Harward’s teeth left a bite mark on the wife’s leg. Another expert, Alvin G. Kagey, also linked the bite mark to Harward, testifying “with all medical certainty” that “there is just not anyone else that would have this unique dentition.” That testimony and the fact that the wife had noticed her assailant was wearing a military uniform were enough to convict Harward.
It turns out that someone else did have the “unique dentition” left on the victim’s leg. DNA evidence not only exonerated Harward, but it also revealed the actual perpetrator was Jerry Crotty. He died in an Ohio prison in 2006 while serving time for an abduction. Crotty served aboard the U.S.S. Carl Vinson with Harward in 1982. On April 7, 2016, the Virginia Supreme Court issued a writ of actual innocence declaring Harward innocent of the crimes.
“Mr. Harward is at least the 25th person to have been wrongly convicted or indicted based on discredited bite mark evidence,” according to Chris Fabricant, Director of Strategic Litigation for the Innocence Project, an organization affiliated with the Benjamin N. Cardozo School of Law at Yeshiva University.
People have been pointing this out for a long time.
On February 2, 2016, a Massachusetts court vacated the conviction of George Perrot for a 1992 rape and burglary after finding the conviction was based upon an FBI expert’s erroneously overstated hair analysis. The 79-page opinion marked the first time a court conducted a thorough review of the science of microscopic hair comparison. The court conducted a two-day hearing during which it heard testimony from multiple defense and prosecution experts.
“The decision is vitally important because it will be followed by many other courts around the country which will have to decide how to deal with this erroneous testimony,” according to Fabricant. “While we don’t know how many cases may ultimately be reversed because of the use of this scientifically invalid evidence, we know from the preliminary findings of the review that FBI agents, over a period of more than two decades, erroneously testified or provided erroneous reports in more than 957 of the cases where microscopic hair analysis was used to connect a defendant to a crime.”
IV. Takeaways.
Most of this is really just to set out the respective positions of the dueling sides in this Great Schism in Science and the Philosophy of Science, and since I’m a lawyer, I’ve made no pretense of being fair to the “other” side in the debate. I’m an advocate - and besides, that famed academic indifference is largely feigned. In sum, what is (perhaps solely) necessary to understand about this intellectual history is that there has existed for some time, among both the ancients and the more modern thinkers, a split in views regarding a number of critical and related issues in philosophy, mathematics, science, and the philosophy of science:
Deduction vs. Induction as vehicles for ascertaining the “Truth” of propositions;
The Probability of Propositions vs. the Probability of Data;
Falsification as science vs. Predictive strength of models as science;
Frequentist statistics vs. Bayesian statistics;
Ontological probability (that objects possess probability) vs. epistemic probability (that probability is a measure of our knowledge and ignorance); and ultimately, most generally,
Certainty vs. the Management of Uncertainty in regards to the world around us.
This last one is the issue of concern for trials, and the one to which we will turn next chapter: how we manage uncertainty in the law in order to arrive at just results, and how we ensure, when looking at “probabilistic” inferences and/or scientific evidence, that we are not committing the “Prosecutor’s Fallacy.” The math shouldn’t be that hard, and I won’t go fast, I promise. If it seems fast, that’s you reading too quickly.12
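A numerical preview of the Prosecutor’s Fallacy may help fix the idea before we get there. The numbers below are hypothetical and of my own choosing, not drawn from any real case: the point is only that P(evidence | innocent) is not P(innocent | evidence). A “one in a million” match sounds damning until you count how many people could have left the match.

```python
# Hypothetical numbers, for illustration only.
population = 1_000_000          # pool of people who could have left the trace
p_match_if_innocent = 1e-6      # P(evidence | innocent): the "one in a million" match

# One guilty person (assumed to match with certainty), plus the innocent
# matches we expect to find purely by chance in a pool this large.
expected_innocent_matches = (population - 1) * p_match_if_innocent
p_guilty_given_match = 1 / (1 + expected_innocent_matches)

print(f"P(match | innocent) = {p_match_if_innocent}")
print(f"P(guilty | match)  = {p_guilty_given_match:.4f}")
```

With these assumptions the posterior probability of guilt is roughly 0.5 - a coin flip, not a near-certainty - because the pool of a million people is expected to contain about one innocent match alongside the guilty one. The fallacy is presenting the one-in-a-million figure to a jury as if it were the probability of innocence.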
I recognize that this bashing of the great Hume will make many who swim in the ocean of ideas uncomfortable, but understand that even the greatest thinkers of any age - including Newton himself - espoused some fairly crazy shit. Newton, who gave us likely the greatest scientific model thus far in human history, spent much of his life after the Principia practicing alchemy - trying to turn lead into gold and discover the Philosopher’s Stone. He wrote a lot of barely coherent prose and suffered a nervous breakdown during this period. No one gets a pass.
Again, lest this seem like just the author’s own rhetorical bomb-throwing, I’ll lean on two with considerably better intellectual reputations: “Bertrand Russell, for example, expressed the view that if Hume’s problem cannot be solved, ‘there is no intellectual difference between sanity and insanity’ (Russell 1946: 699).” The Stanford Encyclopedia of Philosophy: The Problem of Induction. https://plato.stanford.edu/entries/induction-problem/
See Howson, pp. 10-19.
R.T. Cox devoted the final section of his update to his theorem, “The Algebra of Probable Inference,” Johns Hopkins Press, 1961, to answering Hume specifically. See pp. 91-97.
See Mathematics and Plausible Reasoning, Vol: 1, George Polya pp.
A lot was happening in the early 1900s, a good bit of it quite mad, as we shall see.
Scientific Irrationalism: Origins of a Post Modern Cult, David Stove, 2024 (BSB Books), Foreword, p. x, by Keith Windschuttle.
Against the Idols of the Age, a collection of Stove’s essays edited by Roger Kimball, contains a withering history of how Popper arrived at his philosophy of science, along with Stove’s analysis of it. See “Cole Porter and Karl Popper: the Jazz Age in the Philosophy of Science,” pp. 3-7.
Aubrey Clayton, Bernoulli’s Fallacy: Statistical Illogic and the Crisis of Modern Science, pp. 14-15.
Start with the paragraph that contains Venn’s quotation of Laplace and then what follows.
If he were a Catholic and I were Pope, he would be known as Saint Giles Corey.
This may be literal as well as metaphorical. My publishing schedule may be reduced to perhaps three per month because of other, ongoing (professional) writing commitments over the next few months that pay my bills.