Originally, fallibilism (from Medieval Latin: fallibilis, "liable to err") is the philosophical principle that propositions can be accepted even though they cannot be conclusively proven or justified,^{[1]}^{[2]} or that neither knowledge nor belief is certain.^{[3]} The term was coined in the late nineteenth century by the American philosopher Charles Sanders Peirce, as a response to foundationalism. Theorists, following Austrian-British philosopher Karl Popper, may also refer to fallibilism as the notion that knowledge might turn out to be false.^{[4]} Furthermore, fallibilism is said to imply corrigibilism, the principle that propositions are open to revision.^{[5]} Fallibilism is often juxtaposed with infallibilism.
According to philosopher Scott F. Aikin, fallibilism cannot properly function in the absence of infinite regress.^{[6]} The concept, usually attributed to the Pyrrhonist philosopher Agrippa, is argued to be the inevitable outcome of all human inquiry, since every proposition requires justification.^{[7]} Infinite regress, also represented within the regress argument, is closely related to the problem of the criterion and is a constituent of the Münchhausen trilemma. Well-known examples of infinite regress are the cosmological argument, turtles all the way down, and the simulation hypothesis. Many philosophers struggle with the metaphysical implications of infinite regress, and have therefore sought ways to circumvent it.
In the seventeenth century, the English philosopher Thomas Hobbes set forth the concept of "infinite progress", a term with which he captured the human proclivity to strive for perfection.^{[8]} Philosophers such as Gottfried Wilhelm Leibniz, Christian Wolff, and Immanuel Kant elaborated further on the concept. Kant even went on to speculate that an immortal species should hypothetically be able to develop its capacities to perfection.^{[9]} This sentiment is still alive today. Infinite progress has been associated with concepts like science, religion, technology, economic growth, consumerism, and economic materialism, all of which thrive on the belief that they can carry on endlessly. Infinite progress has thus been presented as a panacea for turning the vicious circles of infinite regress into virtuous circles. However, vicious circles have not yet been eliminated from the world; hyperinflation, the poverty trap, and debt accumulation, for instance, still occur.
As early as 350 BCE, the Greek philosopher Aristotle made a distinction between potential and actual infinities. Based on his discourse, it can be said that actual infinities do not exist, because they are paradoxical; Aristotle deemed it impossible for humans to keep adding members to finite sets indefinitely. This eventually led him to refute some of Zeno's paradoxes.^{[10]} Other relevant examples of potential infinities include Galileo's paradox and the paradox of Hilbert's hotel. The notion that infinite regress and infinite progress manifest themselves only potentially pertains to fallibilism. According to philosophy professor Elizabeth F. Cooke, fallibilism embraces uncertainty, and infinite regress and infinite progress are not unfortunate limitations on human cognition but necessary antecedents for knowledge acquisition; they allow us to live functional and meaningful lives.^{[11]}
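The distinction between potential and actual infinity has a loose modern analogue in lazy evaluation: a process can yield ever more values on demand while no completed infinite totality ever exists. A minimal Python sketch of this reading (the names are illustrative, not drawn from the sources cited above):

```python
from itertools import count, islice

def potential_naturals():
    """A potential infinity: natural numbers produced one at a time, on demand.

    No completed (actual) infinite set is ever constructed; only finite
    prefixes are ever materialized, however far we choose to extend them.
    """
    return count(0)  # a lazy iterator, not a finished collection

# Any concrete computation realizes only a finite prefix.
prefix = list(islice(potential_naturals(), 5))
```

The generator can be advanced indefinitely, yet at every moment only finitely many members have been produced, which is one way to picture Aristotle's claim.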
In the mid-twentieth century, several important philosophers began to critique the foundations of logical positivism. In his work The Logic of Scientific Discovery (1934), Karl Popper, the founder of critical rationalism, tried to solve the problem of induction by arguing for falsifiability as a means to devalue the verifiability criterion.^{[12]} He adamantly proclaimed that scientific truths are not inductively inferred from experience and conclusively verified by experimentation, but rather deduced from statements and justified by means of deliberation and intersubjective consensus within a particular scientific community. Popper also tried to resolve the problem of demarcation by asserting that all knowledge is fallible, except for knowledge acquired by means of falsification. Hence, Popperian falsifications are temporarily infallible, until they have been retracted by an adequate research community. Although critical rationalists deny that all claims are fallible, they do believe that all claims are provisional.^{[13]} Counterintuitively, these provisional statements can become conclusive once logical contradictions have been turned into methodological refutations.^{[14]}^{[15]} The claim that all assertions are provisional and thus open to revision in light of new evidence is widely taken for granted in the natural sciences.^{[16]}
Popper insisted that verification and falsification are logically asymmetrical.^{[14]} However, according to the Duhem-Quine thesis, statements can neither be conclusively verified nor falsified in isolation from auxiliary assumptions (also called a bundle of hypotheses).^{[17]}^{[18]} As a consequence, statements are held to be underdetermined. Underdetermination explains how evidence available to us may be insufficient to justify our beliefs. The Duhem-Quine thesis should therefore erode our belief in logical falsifiability as well as in methodological falsification. The thesis can be contrasted with a more recent view posited by philosophy professor Albert Casullo, which holds that statements can be overdetermined. Overdetermination explains how evidence might be considered sufficient for justifying beliefs in absence of auxiliary assumptions.^{[19]} Philosopher Ray S. Percival holds that the Popperian asymmetry is an illusion, because in the action of falsifying an argument, scientists will inevitably verify its negation. Thus, verification and falsification are perfectly symmetrical.^{[20]}^{[21]} It seems, in the philosophy of logic, that neither syllogisms nor polysyllogisms will save underdetermination and overdetermination from the perils of infinite regress.^{[17]}^{[18]}
Furthermore, Popper defended his critical rationalism as a normative and methodological theory that explains how objective, and thus mind-independent, knowledge ought to work.^{[22]} The Hungarian philosopher Imre Lakatos built upon the theory by rephrasing the problem of demarcation as the problem of normative appraisal. Lakatos's and Popper's aims were alike: finding rules that could justify falsifications. However, Lakatos pointed out that critical rationalism only shows how theories can be falsified; it omits how our belief in critical rationalism can itself be justified. Such a justification would require an inductively verified principle.^{[23]} When Lakatos urged Popper to admit that the falsification principle cannot be justified without embracing induction, Popper did not succumb.^{[24]} Lakatos's critical attitude towards rationalism has become emblematic of his so-called critical fallibilism.^{[25]}^{[26]} While critical fallibilism strictly opposes dogmatism, critical rationalism is said to require a limited amount of dogmatism.^{[27]}^{[28]} Even Lakatos himself had been a critical rationalist in the past, when he argued against the inductivist illusion that axioms can be justified by the truth of their consequences.^{[25]} In summary, although Lakatos and Popper each picked one stance over the other, both oscillated between holding a critical attitude towards rationalism and towards fallibilism.^{[24]}^{[26]}^{[27]}^{[29]}
Fallibilism has also been employed by philosopher Willard V. O. Quine to attack, among other things, the distinction between analytic and synthetic statements.^{[30]} British philosopher Susan Haack, following Quine, has argued that the nature of fallibilism is often misunderstood, because people tend to confuse fallible propositions with fallible agents. She claims that logic is revisable, which means that analyticity does not exist and necessity (or a priority) does not extend to logical truths. She hereby opposes the conviction that propositions in logic are infallible, while agents can be fallible.^{[31]} Critical rationalist Hans Albert argues that it is impossible to prove any truth with certainty, not only in logic, but also in mathematics.^{[32]}
In Proofs and Refutations: The Logic of Mathematical Discovery (1976), the philosopher Imre Lakatos applied what he called Popperian "critical fallibilism" to mathematical proofs.^{[33]} Lakatos's mathematical fallibilism is the general view that all mathematical theorems are falsifiable.^{[34]} Mathematical fallibilism deviates from traditional views held by philosophers like Hegel, Peirce, and Popper.^{[25]}^{[34]} Although Peirce introduced fallibilism, he seems to preclude the possibility of us being mistaken in our mathematical beliefs.^{[2]} Mathematical fallibilism appears to uphold that even though a mathematical conjecture cannot be proven true, we may consider some conjectures to be good approximations or estimations of the truth. This so-called verisimilitude may provide us with consistency amidst an inherent incompleteness in mathematics.^{[35]} Mathematical fallibilism differs from quasi-empiricism to the extent that the latter does not incorporate inductivism, a feature considered to be of vital importance to the foundations of set theory.^{[36]}
In the philosophy of mathematics, the central tenet of fallibilism is undecidability (which bears resemblance to the notion of isostheneia, the antithesis of appearance and judgement).^{[34]} Two distinct senses of the word "undecidable" are currently in use. The first is proof-theoretic and applies to statements: a statement is undecidable in this sense if it can neither be proved nor refuted within a specified deductive system.^{[37]} This sense is used in the context of the independence of the continuum hypothesis, which was proposed by mathematician Georg Cantor in 1873.^{[38]} The continuum hypothesis is said to be independent of the axioms of Zermelo–Fraenkel set theory combined with the axiom of choice (also called ZFC): both the hypothesis and its negation are consistent with these axioms. Many noteworthy discoveries preceded the establishment of the continuum hypothesis.
In 1877, Cantor used one-to-one correspondence to show that two infinite sets have equal cardinality when their members can be paired off exactly.^{[39]} In 1891, he introduced the diagonal argument and applied it in Cantor's theorem to show that the power set of any set must have strictly higher cardinality than the set itself.^{[40]} The existence of the power set is postulated in the axiom of power set, a vital part of Zermelo–Fraenkel set theory. Moreover, in 1899, Cantor's paradox was discovered: there is no set of all cardinalities.^{[40]} Two years later, the polymath Bertrand Russell invalidated the existence of the universal set by pointing to Russell's paradox, which implies that no set can contain itself as an element (or member). The universal set can be confuted by utilizing either the axiom schema of separation or the axiom of regularity.^{[41]} In contrast to the universal set, a power set does not contain itself. It was only in 1940 that mathematician Kurt Gödel showed, by applying inter alia the diagonal lemma, that the continuum hypothesis cannot be refuted,^{[37]} and only in 1963 that fellow mathematician Paul Cohen revealed, through the method of forcing, that the continuum hypothesis cannot be proved either.^{[42]} In spite of this undecidability, both Gödel and Cohen suspected the continuum hypothesis to be false. This sense of suspicion, in conjunction with a firm belief in the consistency of ZFC, is in line with mathematical fallibilism.^{[43]} Mathematical fallibilists suppose that new axioms, for example the axiom of projective determinacy, might improve ZFC, but that these axioms will not settle the continuum hypothesis.^{[44]}
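The mechanics of the diagonal argument can be sketched in a few lines of code: from any finite list of binary sequences, one constructs a sequence that differs from the k-th listed sequence at position k, and which therefore appears nowhere in the list. This is only a finite illustration (the function name and example data are invented here); Cantor's argument proper applies the same construction to any purported enumeration of infinite sequences.

```python
def diagonalize(sequences):
    """Build a binary sequence that differs from the k-th input at index k.

    `sequences` is a list of n binary sequences, each of length >= n.
    Because the result disagrees with every row somewhere, it equals
    none of the listed sequences.
    """
    return [1 - seq[k] for k, seq in enumerate(sequences)]

listed = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
missing = diagonalize(listed)  # disagrees with row k at position k
```

However long the list, the same flip-the-diagonal construction produces a sequence the list has missed, which is the core of the cardinality result.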
The second sense of undecidability is used in relation to computability theory (or recursion theory) and applies not to statements but to decision problems: mathematical questions of decidability. An undecidable problem is a type of computational problem posing a countably infinite set of questions, each requiring an effective method to determine whether an output is "yes" or "no" (or whether a statement is "true" or "false"), but for which no computer program or Turing machine can always provide the correct answer; any candidate program would occasionally give a wrong answer or run forever without giving any answer.^{[45]} Famous examples of undecidable problems are the halting problem and the Entscheidungsproblem. Conventionally, an undecidable problem is formalized as a set (or formal language) that is not recursive, and its difficulty is measured by its Turing degree.^{[46]}^{[47]} Practically all undecidable problems are unsolved, but not all unsolved problems are undecidable. Undecidability, with respect to computer science and mathematical logic, is also called unsolvability or non-computability. In the end, both senses of undecidability can help to build a case for fallibilism, by providing these fundamental thought experiments.
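The self-referential construction behind the halting problem can itself be sketched directly: given any claimed halting decider, one can build a program that does the opposite of whatever the decider predicts about it, so every decider must be wrong somewhere. A minimal Python sketch under that framing (the function names are illustrative, and the decider is a deliberately naive stand-in, since no correct one can exist):

```python
def make_counterexample(claims_to_halt):
    """Given a purported halting decider, build a program it misjudges.

    `claims_to_halt(f)` is supposed to return True iff calling f() halts.
    """
    def counterexample():
        if claims_to_halt(counterexample):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return counterexample

# A naive decider that predicts every program runs forever:
always_loops = lambda f: False
prog = make_counterexample(always_loops)
prog()  # returns immediately, refuting the decider's prediction
```

Had the decider instead answered True for `prog`, the program would loop forever, again contradicting the prediction; no definition of the decider escapes both horns.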
Fallibilism improves upon the ideas associated with philosophical skepticism. According to philosophy professor Richard Feldman, nearly all versions of ancient and modern skepticism depend on the mistaken assumption that justification, and thus knowledge, requires conclusive evidence or certainty.^{[48]} An exception can be made for mitigated skepticism. In philosophical parlance, mitigated skepticism is an attitude which supports doubt in knowledge.^{[49]} This attitude is conserved in philosophical endeavors like scientific skepticism (or rational skepticism) and David Hume's inductive skepticism (or inductive fallibilism). Scientific skepticism questions the veracity of claims lacking empirical evidence, while inductive skepticism avers that inductive inference in forming predictions and generalizations cannot be conclusively justified or proven.^{[50]} Mitigated skepticism is also evident in the philosophical journey of Karl Popper.^{[34]} Furthermore, Popper demonstrates the value of fallibilism in his book The Open Society and Its Enemies (1945) by echoing the third maxim inscribed in the forecourt of the Temple of Apollo at Delphi: "surety brings ruin".^{[51]}
But the fallibility of our knowledge — or the thesis that all knowledge is guesswork, though some consists of guesses which have been most severely tested — must not be cited in support of scepticism or relativism. From the fact that we can err, and that a criterion of truth which might save us from error does not exist, it does not follow that the choice between theories is arbitrary, or non-rational: that we cannot learn, or get nearer to the truth: that our knowledge cannot grow.
— Karl Popper
Fallibilism differs slightly from academic skepticism (also called global skepticism, absolute skepticism, universal skepticism, radical skepticism, or epistemological nihilism) in the sense that fallibilists assume that no beliefs are certain (not even those established a priori), while proponents of academic skepticism advocate that no beliefs exist. To defend their position, these skeptics will either engage in epochē, a suspension of judgement, or resort to acatalepsy, a rejection of all knowledge.^{[52]} The concept of epoché is often attributed to Pyrrhonian skepticism, while the concept of acatalepsy can be traced back to multiple branches of skepticism. Acatalepsy is also closely related to the Socratic paradox. Nonetheless, epoché and acatalepsy are respectively self-contradictory and self-refuting, namely because both concepts rely (be it logically or methodologically) on their own existence to serve as a justification.^{[53]} Lastly, local skepticism is the view that people cannot obtain knowledge of a particular area or subject (e.g. morality, religion, or metaphysics).
Nearly all philosophers today are fallibilists in some sense of the term.^{[3]} Few would claim that knowledge requires absolute certainty, or deny that scientific claims are revisable, though in the 21st century some philosophers have argued for some version of infallibilist knowledge.^{[54]}^{[55]}^{[56]} Historically, many Western philosophers from Plato to Saint Augustine to René Descartes have argued that some human beliefs are infallibly known. Plausible candidates for infallible beliefs include logical truths ("Either Jones is a Democrat or Jones is not a Democrat"), immediate appearances ("It seems that I see a patch of blue"), and incorrigible beliefs (i.e., beliefs that are true in virtue of being believed, such as Descartes' "I think, therefore I am"). Many others, however, have taken even these types of beliefs to be fallible.^{[31]}
Popperian critical fallibilism takes the infinite regress in proofs and definitions seriously, does not have illusions about 'stopping' them, accepts the sceptic criticism of any infallible truth-injection.
— Imre Lakatos
However, Lakatos' interpretation of Popper was not equivalent to Popper's philosophy. As Ravn and Skovsmose note: "Lakatos also refers to the scepticist programme as a 'Popperian critical fallibilism.' However, we find that this labelling could be a bit misleading as the programme includes a good deal of Lakatos' own philosophy." (Ravn, Ole; Skovsmose, Ole (2019). "Mathematics as Dialogue". Connecting Humans to Equations: A Reinterpretation of the Philosophy of Mathematics. History of Mathematics Education. Cham: Springer-Verlag. pp. 107–119 (110). doi:10.1007/978-3-030-01337-0_8. ISBN 9783030013363.)