
Foundations of mathematics

Foundations of mathematics is the logical and mathematical framework that allows the development of mathematics without generating self-contradictory theories and, in particular, provides reliable concepts of theorems, proofs, algorithms, etc. It may also include the philosophical study of the relation of this framework with reality.[1]

For the book by Hilbert and Bernays, see Grundlagen der Mathematik.

The term "foundations of mathematics" was not coined before the end of the 19th century. However, there were first established by the ancient Greek philosophers under the name of Aristotle's logic and systematically applied in Euclid's Elements. In short, a mathematical assertion is considered as truth only if it is a theorem that is proved from true premises by means of a sequence of syllogisms (inference rules), the premises being either already proved theorems or self-evident assertions called axioms or postulates.


These foundations seemed to be a definitive achievement until the 17th century and the introduction of infinitesimal calculus by Isaac Newton and Gottfried Wilhelm Leibniz. This new area of mathematics involved new methods of reasoning and new basic concepts (continuous functions, derivatives, limits) that were not well founded, but had astonishing consequences, such as the fact that one can deduce from Newton's law of gravitation that the orbits of the planets are ellipses.


During the 19th century, several mathematicians worked to elaborate precise definitions for the basic concepts of infinitesimal calculus, including definitions of the natural and real numbers. This led, near the end of the 19th century, to a series of paradoxical mathematical results that challenged the general confidence in the reliability and truth of mathematical results. This has been called the foundational crisis of mathematics.


The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic, which includes set theory, model theory, proof theory, computability and computational complexity theory, and, more recently, several parts of computer science. During the 20th century, the discoveries made in this area stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on ZFC, the Zermelo–Fraenkel set theory with the axiom of choice, and on a systematic use of the axiomatic method.


It follows that basic mathematical concepts, such as numbers, points, lines, and geometrical spaces, are no longer defined as abstractions from reality; they are defined only by their basic properties (axioms). Their correspondence with their physical origins no longer belongs to mathematics, although their relation with physical reality is still used by mathematicians to choose axioms, to find which theorems are interesting to prove, and to obtain indications of possible proofs; in short, the relation with reality is used to guide mathematical intuition.

Before infinitesimal calculus

During the Middle Ages, Euclid's Elements stood as a perfectly solid foundation for mathematics, and philosophy of mathematics concentrated on the ontological status of mathematical concepts; the question was whether they exist independently of perception (realism) or within the mind only (conceptualism), or even whether they are simply names of collections of individual objects (nominalism).


In the Elements, the only numbers that are considered are natural numbers and ratios of lengths. This geometrical view of non-integer numbers remained dominant until the end of the Middle Ages, although the rise of algebra led to considering them independently of geometry, which implicitly treats them as foundational primitives of mathematics. For example, the transformations of equations introduced by Al-Khwarizmi and the cubic and quartic formulas discovered in the 16th century result from algebraic manipulations that have no geometric counterpart.


Nevertheless, this did not challenge the classical foundations of mathematics, since all properties of numbers that were used could be deduced from their geometrical definition.


In 1637, René Descartes published La Géométrie, in which he showed that geometry can be reduced to algebra by means of coordinates, which are numbers determining the position of a point. This gives the numbers that he called real numbers a more foundational role (before him, numbers were defined as the ratio of two lengths). Descartes' book became famous after 1649 and paved the way to infinitesimal calculus.
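
For instance, in this coordinate setting the circle of radius r centered at the origin becomes the set of points with coordinates (x, y) satisfying

    x^2 + y^2 = r^2,

so that geometric statements about the circle can be handled by algebraic manipulation of this equation. (This is a standard illustration of the method, not an example taken from La Géométrie.)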

Infinitesimal calculus

Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus for dealing with moving points (such as planets in the sky) and variable quantities.


This required the introduction of new concepts such as continuous functions, derivatives and limits. For dealing with these concepts in a logical way, they were defined in terms of infinitesimals, that is, hypothetical numbers that are infinitely close to zero. The strong implications of infinitesimal calculus for the foundations of mathematics are illustrated by a pamphlet of the Protestant philosopher George Berkeley (1685–1753), who wrote "[Infinitesimals] are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?".[3]
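
The kind of reasoning Berkeley objected to can be reconstructed, as an illustration, by computing the derivative of x^2 with an infinitesimal increment \varepsilon (this worked example is not taken from the article or its sources):

    \frac{(x+\varepsilon)^2 - x^2}{\varepsilon} = \frac{2x\varepsilon + \varepsilon^2}{\varepsilon} = 2x + \varepsilon,

after which the remaining \varepsilon is discarded to obtain the derivative 2x. The difficulty is that \varepsilon is treated as nonzero when one divides by it, and as zero when one discards it.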


Also, a lack of rigor was frequently invoked, because infinitesimals and the associated concepts were not formally defined (lines and planes were not formally defined either, but people were more accustomed to them). Real numbers, continuous functions and derivatives were not formally defined before the 19th century, and neither was Euclidean geometry. It was only in the 20th century that a formal definition of infinitesimals was given, with the proof that the whole of infinitesimal calculus can be deduced from them.


Despite its lack of firm logical foundations, infinitesimal calculus was quickly adopted by mathematicians and validated by its numerous applications, in particular the fact that the trajectories of the planets can be deduced from Newton's law of gravitation.

1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property.

1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements.

1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. It showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as is necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers – a statement that formally expresses its own unprovability (summarized schematically after this list), which he then proved equivalent to the claim of consistency of the theory; so that (assuming the consistency as true), the system is not powerful enough for proving its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove).

1936: Alfred Tarski proved his truth undefinability theorem.

1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.

1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis.

1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem).

1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.

1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.

1964: Inspired by the fundamental randomness in physics, Gregory Chaitin starts publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics).[15]

1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.

1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.

1971: Suslin's problem is proven to be independent of ZFC.
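
The construction described in the 1931 entry can be summarized schematically; the following is a conventional modern rendering, not notation taken from this article or its sources. For a consistent, recursively axiomatizable theory T strong enough to formalize elementary arithmetic, with provability predicate \mathrm{Prov}_T and arithmetized consistency statement \mathrm{Con}(T), Gödel constructs a sentence G_T satisfying

    T \vdash G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)
    T \vdash G_T \leftrightarrow \mathrm{Con}(T)

so that, if T is indeed consistent, neither G_T nor \mathrm{Con}(T) is provable in T.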

Toward resolution of the crisis

Starting in 1935, the Bourbaki group of French mathematicians began publishing a series of books to formalize many areas of mathematics on the new foundation of set theory.


The intuitionistic school did not attract many adherents, and it was not until Bishop's work in 1967 that constructive mathematics was placed on a sounder footing.[16]


One may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, provided one is satisfied with lower requirements than Hilbert's original ambitions. His ambitions were expressed in a time when nothing was clear: it was not clear whether mathematics could have a rigorous foundation at all.


There are many possible variants of set theory, which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we do not have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF.
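
A standard concrete illustration of this hierarchy (given here as an example; it is not discussed in this article) takes \mathrm{Con}(\mathrm{ZFC}) to be the arithmetized consistency statement of ZFC and \mathrm{IC} the large cardinal axiom "there exists an inaccessible cardinal":

    \mathrm{ZFC} + \mathrm{IC} \vdash \mathrm{Con}(\mathrm{ZFC})
    \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}) \quad (assuming ZFC is consistent, by Gödel's second incompleteness theorem)

since an inaccessible cardinal \kappa yields a set model V_\kappa of ZFC. The same pattern repeats at every level: each stronger theory proves the consistency of the weaker ones, but never its own.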


In practice, most mathematicians either do not work from axiomatic systems, or if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway, and in those branches in which they do or whose formalization attempts would run the risk of forming inconsistent theories (such as logic and category theory), they may be treated carefully.


The development of category theory in the middle of the 20th century showed the usefulness of set theories guaranteeing the existence of larger classes than does ZFC, such as Von Neumann–Bernays–Gödel set theory or Tarski–Grothendieck set theory, although in very many cases the use of large cardinal axioms or Grothendieck universes is formally eliminable.


One goal of the reverse mathematics program is to identify whether there are areas of "core mathematics" in which foundational issues may again provoke a crisis.

Aristotelian realist philosophy of mathematics

Mathematical logic

Brouwer–Hilbert controversy

Church–Turing thesis

Controversy over Cantor's theory

Epistemology

Euclid's Elements

Hilbert's problems

Implementation of mathematics in set theory

Liar paradox

New Foundations

Philosophy of mathematics

Principia Mathematica

Quasi-empiricism in mathematics

Mathematical thought of Charles Peirce


Media related to Foundations of mathematics at Wikimedia Commons

"Philosophy of mathematics". Internet Encyclopedia of Philosophy.

Logic and Mathematics

Foundations of Mathematics: past, present, and future, May 31, 2000, 8 pages.

A Century of Controversy over the Foundations of Mathematics by Gregory Chaitin.