
Falsifiability

Falsifiability (or refutability) is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934).[B] A theory or hypothesis is falsifiable (or refutable) if it can be logically contradicted by an empirical test.

Popper emphasized the asymmetry created by the relation of a universal law with basic observation statements[C] and contrasted falsifiability to the intuitively similar concept of verifiability that was then current in logical positivism. He argued that the only way to verify a claim such as "All swans are white" would be if one could theoretically observe all swans,[D] which is not possible. By contrast, falsifying the claim requires only an anomalous instance: the observation of a single black swan is theoretically reasonable and suffices to logically falsify the claim.


Popper proposed falsifiability as the cornerstone solution to both the problem of induction and the problem of demarcation. He insisted that, as a logical criterion, falsifiability as he defined it is distinct from the related concept "capacity to be proven wrong" discussed in Lakatos's falsificationism.[E][F][G] Even though it is a logical criterion, its purpose is to make the theory predictive and testable, and thus useful in practice.


By contrast, the Duhem–Quine thesis says that definitive experimental falsifications are impossible[1] and that no scientific hypothesis is by itself capable of making predictions, because an empirical test of the hypothesis requires one or more background assumptions.[2]
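Schematically (the symbols H, A and P are introduced here only for illustration and are not part of the sources), if a hypothesis H yields a prediction P only in conjunction with background assumptions A, then a failed prediction refutes only the conjunction, not the hypothesis alone:

\[
(H \land A) \rightarrow P, \qquad \neg P \;\therefore\; \neg(H \land A), \quad \text{that is,} \quad \neg H \lor \neg A .
\]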


Popper's response is that falsifiability does not have the Duhem problem[H] because it is a logical criterion. Experimental research has the Duhem problem and other problems, such as the problem of induction,[I] but, according to Popper, statistical tests, which are only possible when a theory is falsifiable, can still be useful within a critical discussion.


As a key notion in the separation of science from non-science and pseudoscience, falsifiability has featured prominently in many scientific controversies and applications, even being used as legal precedent.

The elusive distinction between the logic of science and its applied methodology

Popper distinguished between the logic of science and its applied methodology.[E] For example, the falsifiability of Newton's law of gravitation, as defined by Popper, depends purely on the logical relation it has with a statement such as "The brick fell upwards when released".[20][T] A brick that falls upwards would not alone falsify Newton's law of gravitation. The capacity to verify the absence of conditions such as a hidden string[U] attached to the brick is also needed for this state of affairs[A] to eventually falsify Newton's law of gravitation. However, these applied methodological considerations are irrelevant to falsifiability, because it is a logical criterion. The empirical requirement on the potential falsifier, also called the material requirement,[V] is only that it is observable inter-subjectively with existing technologies. There is no requirement that the potential falsifier can actually show the law to be false. The purely logical contradiction, together with the material requirement, is sufficient. The logical part consists of theories, statements, and their purely logical relationships, together with this material requirement, which is needed for a connection with the methodological part.
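A minimal sketch of the purely logical relation at stake (the symbolization is ours, not Popper's): writing G for Newton's law of gravitation and b for the basic statement "The brick fell upwards when released", falsifiability requires only that G and b be logically inconsistent; whether the absence of a hidden string could actually be verified is a separate, methodological question.

\[
\{\, G,\ b \,\} \vdash \bot \qquad \text{(the law and the basic statement are jointly inconsistent, so } b \text{ is a potential falsifier of } G\text{)} .
\]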


The methodological part consists, in Popper's view, of informal rules, which are used to guess theories, accept observation statements as factual, etc. These include statistical tests: Popper is aware that observation statements are accepted with the help of statistical methods and that these involve methodological decisions.[21] When this distinction is applied to the term "falsifiability", it corresponds to a distinction between two completely different meanings of the term. The same is true for the term "falsifiable". Popper said that he only uses "falsifiability" or "falsifiable" in reference to the logical side and that, when he refers to the methodological side, he speaks instead of "falsification" and its problems.[F]


Popper said that methodological problems require proposing methodological rules. For example, one such rule is that, if one refuses to go along with falsifications, then one has retired oneself from the game of science.[22] The logical side does not have such methodological problems, in particular with regard to the falsifiability of a theory, because basic statements are not required to be possible. Methodological rules are only needed in the context of actual falsifications.


So observations have two purposes in Popper's view. On the methodological side, observations can be used to show that a law is false, which Popper calls falsification. On the logical side, observations, which are purely logical constructions, do not show a law to be false, but contradict a law to show its falsifiability. Unlike falsifications and free from the problems of falsification, these contradictions establish the value of the law, which may eventually be corroborated.


Popper wrote that an entire literature exists because this distinction between the logical aspect and the methodological aspect was not observed.[G] The same confusion still appears in more recent literature. For example, in their 2019 article Evidence based medicine as science, Vere and Gibson wrote "[falsifiability has] been considered problematic because theories are not simply tested through falsification but in conjunction with auxiliary assumptions and background knowledge."[23] Although Popper insisted that he was aware that falsifications are impossible, and added that this is not an issue for his falsifiability criterion because the criterion has nothing to do with the possibility or impossibility of falsifications,[F] Stove and others, often referring to Lakatos's original criticism, continue to maintain that the problems of falsification are a failure of falsifiability.[24]

Basic statements and the definition of falsifiability

Basic statements

In Popper's view of science, statements of observation can be analyzed within a logical structure independently of any factual observations.[W][X] The set of all purely logical observations that are considered constitutes the empirical basis. Popper calls them the basic statements or test statements. They are the statements that can be used to show the falsifiability of a theory. Popper says that basic statements do not have to be possible in practice. It is sufficient that they are accepted by convention as belonging to the empirical language, a language that allows intersubjective verifiability: "they must be testable by intersubjective observation (the material requirement)".[25][Y] See the examples in section § Examples of demarcation and applications.


In more than twelve pages of The Logic of Scientific Discovery,[26] Popper discusses informally which statements among those that are considered in the logical structure are basic statements. A logical structure uses universal classes to define laws. For example, in the law "all swans are white" the concept of swans is a universal class. It corresponds to a set of properties that every swan must have. It is not restricted to the swans that exist, have existed, or will exist. Informally, a basic statement is simply a statement that concerns only a finite number of specific instances in universal classes. In particular, an existential statement such as "there exists a black swan" is not a basic statement, because it is not specific about the instance. On the other hand, "this swan here is black" is a basic statement. Popper says that it is a singular existential statement or simply a singular statement. So, basic statements are singular (existential) statements.
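In first-order notation (our rendering of the examples above, not Popper's own formalism), with S(x) for "x is a swan", W(x) for "x is white" and a naming the specific observed individual ("this swan here"), the three statements read:

\[
\text{law: } \forall x\,\bigl(S(x) \rightarrow W(x)\bigr), \qquad
\text{existential, not basic: } \exists x\,\bigl(S(x) \land \neg W(x)\bigr), \qquad
\text{basic: } S(a) \land \neg W(a) .
\]

The basic statement logically contradicts the law, and this contradiction is what establishes the law's falsifiability.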

The definition of falsifiability

Thornton says that basic statements are statements that correspond to particular "observation-reports". He then gives Popper's definition of falsifiability, which can be paraphrased as follows: a theory is scientific if and only if it divides the class of basic statements into two non-empty sub-classes, namely the basic statements with which it is inconsistent, called its potential falsifiers, and the basic statements which it permits, that is, with which it is consistent. When falsifiability was later invoked as legal precedent to separate science from non-science, the essential characteristics of science were summarized as follows:

It is guided by natural law;

It has to be explanatory by reference to natural law;

It is testable against the empirical world;

Its conclusions are tentative, i.e., are not necessarily the final word; and

It is falsifiable.

Connections between statistical theories and falsifiability

Considering the specific detection procedure that was used in the neutrino experiment, without mentioning its probabilistic aspect, Popper wrote "it provided a test of the much more significant falsifiable theory that such emitted neutrinos could be trapped in a certain way". In this manner, in his discussion of the neutrino experiment, Popper did not raise at all the probabilistic aspect of the experiment.[43] Together with Maxwell, who raised the problems of falsification in the experiment,[42] he was aware that some convention must be adopted to fix what it means to detect, or fail to detect, a neutrino in this probabilistic context. This is the third kind of decision mentioned by Lakatos.[52] For Popper and most philosophers, observations are theory-impregnated. In this example, the theory that impregnates the observations (and justifies conventionally accepting the potential falsifier "no neutrino was detected") is statistical. In statistical language, the potential falsifier that can be statistically accepted (more correctly, not rejected) is typically the null hypothesis, as understood even in popular accounts of falsifiability.[53][54][55]
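As an illustration of the methodological convention involved (a sketch only: the counts, the background rate, the significance level and the use of SciPy's binomial test are our assumptions, not details of the actual neutrino experiment), a potential falsifier such as a null hypothesis is retained or rejected by a rule fixed in advance:

    # Sketch: a conventional statistical rule for deciding whether the
    # potential falsifier "no neutrino was detected" is retained.
    from scipy.stats import binomtest

    n_windows = 10_000        # observation windows (assumed number)
    n_hits = 57               # windows with a candidate event (assumed number)
    background_rate = 0.003   # expected hit rate under the null hypothesis (assumed)
    alpha = 0.01              # significance level fixed in advance (a methodological decision)

    # Null hypothesis H0: candidate events occur only at the background rate,
    # i.e. no neutrino was detected.
    result = binomtest(n_hits, n_windows, background_rate, alternative="greater")

    if result.pvalue < alpha:
        print("Reject H0: detection is claimed; the potential falsifier is not accepted.")
    else:
        print("Do not reject H0: by convention, the potential falsifier stands.")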


Statisticians use different approaches to draw conclusions about hypotheses on the basis of the available evidence. Fisher, Neyman and Pearson proposed approaches that require no prior probabilities on the hypotheses that are being studied. In contrast, Bayesian inference emphasizes the importance of prior probabilities.[56] But, as far as falsification as a yes/no procedure in Popper's methodology is concerned, any approach that provides a way to accept or reject a potential falsifier can be used, including approaches that use Bayes' theorem and estimations of prior probabilities that are made using critical discussions and reasonable assumptions taken from the background knowledge.[AX] There is no general rule that counts a hypothesis as falsified because its Bayesian revised probability is small, because, as pointed out by Mayo and argued before by Popper, individual outcomes described in detail will easily have very small probabilities under the available evidence without being genuine anomalies.[57] Nevertheless, Mayo adds, "they can indirectly falsify hypotheses by adding a methodological falsification rule".[57] In general, Bayesian statistics can play a role in critical rationalism in the context of inductive logic,[58] which is said to be inductive because implications are generalized to conditional probabilities.[59] According to Popper and other philosophers such as Colin Howson, Hume's argument precludes inductive logic, but only when the logic makes no use "of additional assumptions: in particular, about what is to be assigned positive prior probability".[60] Inductive logic itself is not precluded, especially not when it is a deductively valid application of Bayes' theorem that is used to evaluate the probabilities of the hypotheses using the observed data and what is assumed about the priors. Gelman and Shalizi mentioned that Bayesian statisticians do not have to disagree with the non-inductivists.[61]
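For reference, the formula at issue is the standard form of Bayes' theorem (not a rule specific to Popper or Mayo): the revised probability of a hypothesis H given evidence E is

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} .
\]

The point reported above is that an individual outcome E described in full detail typically has a very small probability under any hypothesis, so a small probability by itself does not mark a genuine anomaly, which is why an additional methodological falsification rule is needed.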


Because statisticians often associate statistical inference with induction, Popper's philosophy is often said to have a hidden form of induction. For example, Mayo wrote "The falsifying hypotheses ... necessitate an evidence-transcending (inductive) statistical inference. This is hugely problematic for Popper".[62] Yet, also according to Mayo, Popper [as a non-inductivist] acknowledged the useful role of statistical inference in the falsification problems: she mentioned that Popper wrote her (in the context of falsification based on evidence) "I regret not studying statistics" and that her thought was then "not as much as I do".[63]

The dictionary definition of falsifiability at Wiktionary