Logic is the study of correct reasoning or good arguments. It is often defined in a more narrow sense as the science of deductively valid inferences or of logical truths. In this sense, it is equivalent to formal logic and constitutes a formal science investigating how conclusions follow from premises in a topic-neutral way or which propositions are true only in virtue of the logical vocabulary they contain. When used as a countable noun, the term "a logic" refers to a logical formal system. Formal logic contrasts with informal logic, which is also part of logic when understood in the widest sense. There is no general agreement on how the two are to be distinguished. One prominent approach associates their difference with the study of arguments expressed in formal or informal languages. Another characterizes informal logic as the study of ampliative inferences, in contrast to the deductive inferences studied by formal logic. But it is also common to link their difference to the distinction between formal and informal fallacies.
Logic is based on various fundamental concepts. It studies arguments, which are made up of a set of premises together with a conclusion. Premises and conclusions are usually understood either as sentences or as propositions and are characterized by their internal structure. Complex propositions are made up of other propositions linked to each other by propositional connectives. Simple propositions have subpropositional parts, like singular terms and predicates. In either case, the truth of a proposition usually depends on the denotations of its constituents. Logically true propositions constitute a special case since their truth depends only on the logical vocabulary used in them.
The arguments or inferences made up of these propositions can be either correct or incorrect. An argument is correct if its premises support its conclusion. The strongest form of support is found in deductive arguments: it is impossible for their premises to be true and their conclusion to be false. This is the case if they follow a rule of inference, which ensures the truth of the conclusion if the premises are true. A consequence of this is that deductive arguments cannot arrive at any substantive new information not already found in their premises. They contrast in this respect with ampliative arguments, which may provide genuinely new information. This comes with an important drawback: it is possible for all their premises to be true while their conclusion is still false. Many arguments found in everyday discourse and the sciences are ampliative arguments. They are sometimes divided into inductive and abductive arguments. Inductive arguments usually take the form of statistical generalizations while abductive arguments are inferences to the best explanation. Arguments that fall short of the standards of correct reasoning are called fallacies. For formal fallacies, the source of the error is found in the form of the argument while informal fallacies usually contain errors on the level of the content or the context. Besides the definitory rules of logic, which determine whether an argument is correct or not, there are also strategic rules, which describe how a chain of correct arguments can be used to arrive at one's intended conclusion. In formal logic, formal systems are often used to give a precise definition of correct reasoning using a formal language.
Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. Aristotelian logic focuses on reasoning in the form of syllogisms. Its traditional dominance was replaced by classical logic in the modern era. Classical logic is "classical" in the sense that it is based on various fundamental logical intuitions shared by most logicians. It consists of propositional logic and first-order logic. Propositional logic ignores the internal structure of simple propositions and only considers the logical relations on the level of propositions. First-order logic, on the other hand, articulates this internal structure using various linguistic devices, such as predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and extend it to other fields, such as metaphysics, ethics, and epistemology. This usually happens by introducing new logical symbols, such as modal operators. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative accounts of the fundamental laws of logic. While most systems of logic belong to formal logic, some systems of informal logic have also been proposed. One prominent approach understands reasoning as a dialogical game of persuasion while another focuses on the epistemic role of arguments. Logic is studied in and applied to various fields, such as philosophy, mathematics, computer science, and linguistics. Logic has been studied since antiquity; early approaches include Aristotelian logic, Stoic logic, Anviksiki, and Mohism. Modern formal logic has its roots in the work of late 19th-century mathematicians such as Gottlob Frege.
The word "logic" originates from the Greek word "logos", which has a variety of translations, such as reason, discourse, or language. Logic is traditionally defined as the study of the laws of thought or correct reasoning. This is usually understood in terms of inferences or arguments: reasoning may be seen as the activity of drawing inferences, whose outward expression is given in arguments. An inference or an argument is a set of premises together with a conclusion. Logic is interested in whether arguments are good or inferences are valid, i.e. whether the premises support their conclusions.
These general characterizations apply to logic in the widest sense since they are true both for formal and informal logic. But many definitions of logic focus on formal logic because it is the paradigmatic form of logic. In this narrower sense, logic is a formal science that studies how conclusions follow from premises in a topic-neutral way. As a formal science, it contrasts with empirical sciences, like physics or biology, because it tries to characterize the inferential relations between premises and conclusions only based on how they are structured. This means that the actual content of these propositions, i.e. their specific topic, is not important for whether the inference is valid or not. This can be expressed by distinguishing between logical and non-logical vocabulary: inferences are valid because of the logical terms used in them, independent of the meanings of the non-logical terms. Valid inferences are characterized by the fact that the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. The general logical structures characterizing valid inferences are called rules of inference. In this sense, logic is often defined as the study of valid inference. This contrasts with another prominent characterization of logic as the science of logical truths. A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms. These two characterizations of logic are closely related to each other: an inference is valid if the material conditional from its premises to its conclusion is logically true.
The term "logic" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Different logics differ from each other concerning the formal languages used to express them and, most importantly, concerning the rules of inference they accept as valid. Starting in the 20th century, many new formal systems have been proposed. There is an ongoing debate about which of these systems should be considered logics in the strict sense and which should be considered non-logical formal systems. Suggested criteria for this distinction include logical completeness and proximity to the intuitions governing classical logic. According to these criteria, it has been argued, for example, that higher-order logics and fuzzy logic should not be considered logics when understood in a strict sense.
When understood in the widest sense, logic encompasses both formal and informal logic. Formal logic is the traditionally dominant field. Various problems in applying its insights to actual everyday arguments have prompted modern developments of informal logic. They often stress its significance for various practical purposes which formal logic on its own is unable to address. Both have in common that they aim to provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. Various suggestions have been made concerning how to draw the distinction between the two but there is no universally accepted answer. These difficulties often coincide with the wide disagreements about how informal logic is to be defined.
The most literal approach sees the terms "formal" and "informal" as applying to the language used to express arguments. On this view, formal logic studies arguments expressed in formal languages while informal logic studies arguments expressed in informal or natural languages. This means that the inference from the formulas "P" and "Q" to the conclusion "P ∧ Q" is studied by formal logic while the inference from the English sentences "Al lit a cigarette" and "Bill stormed out of the room" to the sentence "Al lit a cigarette and Bill stormed out of the room" belongs to informal logic. Formal languages are characterized by their precision and simplicity. They normally contain a very limited vocabulary and exact rules on how their symbols can be used to construct sentences, usually referred to as well-formed formulas. This simplicity and exactness in turn make it possible for formal logic to formulate precise rules of inference that determine whether a given argument is valid. This approach brings with it the need to translate natural language arguments into the formal language before their validity can be assessed, a procedure that comes with various problems of its own. Informal logic avoids some of these problems by analyzing natural language arguments in their original form without the need for translation. But it faces related problems of its own, associated with the ambiguity, vagueness, and context-dependence of natural language expressions. A closely related approach applies the terms "formal" and "informal" not just to the language used, but more generally to the standards, criteria, and procedures of argumentation.
Another approach draws the distinction according to the different types of inferences analyzed. This perspective understands formal logic as the study of deductive inferences in contrast to informal logic as the study of non-deductive inferences, like inductive or abductive inferences. The characteristic of deductive inferences is that the truth of their premises ensures the truth of their conclusion. This means that if all the premises are true, it is impossible for the conclusion to be false. For this reason, deductive inferences are in a sense trivial or uninteresting since they do not provide the thinker with any new information not already found in the premises. Non-deductive inferences, on the other hand, are ampliative: they help the thinker learn something above and beyond what is already stated in the premises. They achieve this at the cost of certainty: even if all premises are true, the conclusion of an ampliative argument may still be false.
One more approach tries to link the difference between formal and informal logic to the distinction between formal and informal fallacies. This distinction is often drawn in relation to the form, content, and context of arguments. In the case of formal fallacies, the error is found on the level of the argument's form, whereas for informal fallacies, the content and context of the argument are responsible. This is connected to the idea that formal logic abstracts away from the argument's content and is only interested in its form, specifically whether it follows a valid rule of inference. It also concerns the idea that it's not important for the validity of a formal argument whether its premises are true or false. Informal logic, on the other hand, also takes the content and context of an argument into consideration. A false dilemma, for example, involves an error of content by excluding viable options, as in "you are either with us or against us; you are not with us; therefore, you are against us". For the strawman fallacy, on the other hand, the error is found on the level of context: a weak position is first described and then defeated, even though the opponent does not hold this position. But in another context, against an opponent that actually defends the strawman position, the argument is correct.
Other accounts draw the distinction based on investigating general forms of arguments in contrast to particular instances, on the study of logical constants instead of substantive concepts, on the discussion of logical topics with or without formal devices, or on the role of epistemology for the assessment of arguments.
Premises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic. In the case of a valid inference or a correct argument, the conclusion follows from the premises or the premises support the conclusion. For instance, the premises "Mars is red" and "Mars is a planet" support the conclusion "Mars is a red planet". It is generally accepted that premises and conclusions have to be truth-bearers.[i] This means that they have a truth value, that they are either true or false. Thus contemporary philosophy generally sees them either as propositions or as sentences. Propositions are the denotations of sentences and are usually understood as abstract objects.
Propositional theories of premises and conclusions are often criticized because of the difficulties involved in specifying the identity criteria of abstract objects or because of naturalist considerations. These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on the reader's computer screen. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning that whether an argument is valid would not only depend on its parts but also on its context and on how it is interpreted.
In earlier work, premises and conclusions were understood in psychological terms as thoughts or judgments, an approach known as "psychologism". This position was heavily criticized around the turn of the 20th century.
A central aspect of premises and conclusions for logic, independent of how their nature is conceived, concerns their internal structure. As propositions or sentences, they can be either simple or complex. A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like "and" or "if-then". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. For example, the simple proposition "Mars is red" can be formed by applying the predicate "red" to the singular term "Mars". In contrast, the complex proposition "Mars is red and Venus is white" is made up of two simple propositions connected by the propositional connective "and".
Whether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth-values of their parts. But this relation is more complicated in the case of simple propositions and their subpropositional parts. These subpropositional parts have meanings of their own, like referring to objects or classes of objects. Whether the simple proposition they form is true depends on their relation to reality, i.e. what the objects they refer to are like. This topic is studied by theories of reference.
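The truth-functional behavior of connectives described above can be sketched in code: the truth value of a complex proposition is computed solely from the truth values of its parts. This is a minimal illustrative sketch, not drawn from any particular logic library; the function names are my own.

```python
# Truth-functional connectives as functions over truth values.
# The value of the compound depends only on the values of its parts.

def conj(p, q):
    """Truth value of "p and q"."""
    return p and q

def cond(p, q):
    """Truth value of "if p then q" (the material conditional)."""
    return (not p) or q

# "Mars is red and Venus is white" is true just in case both parts are true.
mars_is_red = True
venus_is_white = True
print(conj(mars_is_red, venus_is_white))  # prints True
```

Subpropositional parts like "Mars" and "red" have no analogue here: a truth-functional treatment sees only whole propositions and their truth values.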
In some cases, a simple or a complex proposition is true independently of the substantive meanings of its parts. For example, the complex proposition "if Mars is red, then Mars is red" is true independent of whether its parts, i.e. the simple proposition "Mars is red", are true or false. In such cases, the truth is called a logical truth: a proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true under all interpretations of its non-logical terms. In some modal logics, this notion can be understood equivalently as truth at all possible worlds. Logical truth plays an important role in logic and some theorists even define logic as the study of logical truths.
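The idea that a logical truth holds under all interpretations of its non-logical parts can be made mechanical for propositional formulas by enumerating every assignment of truth values. The following brute-force check is an illustrative sketch under that assumption:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """A formula is logically true (a tautology) if it comes out true
    under every assignment of truth values to its atomic parts."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

# "if Mars is red, then Mars is red" has the form "if P then P":
print(is_tautology(lambda p: (not p) or p, 1))  # True
# "Mars is red" alone (form "P") is not logically true:
print(is_tautology(lambda p: p, 1))             # False
```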
Logic is commonly defined in terms of arguments or inferences as the study of their correctness. An argument is a set of premises together with a conclusion. An inference is the process of reasoning from these premises to the conclusion. But these terms are often used interchangeably in logic. Sometimes a distinction is made between simple and complex arguments. A complex argument is made up of a chain of simple arguments. These simple arguments constitute a chain because the conclusions of the earlier arguments are used as premises in the later arguments. For a complex argument to be successful, each link of the chain has to be successful.
A central aspect of arguments and inferences is that they are correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still constitute good arguments because their premises offer non-deductive support to their conclusions. For such cases, the term ampliative or inductive reasoning is used. Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic.
A deductively valid argument is one whose premises guarantee the truth of its conclusion. For instance, the argument "Victoria is tall; Victoria has brown hair; therefore Victoria is tall and has brown hair" is deductively valid. Alfred Tarski holds that deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances.
Because of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. Rules of inference specify how the premises and the conclusion have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. The modus ponens is a prominent rule of inference. It has the form "if A, then B; A; therefore B".
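The formality of rules of inference can be illustrated by implementing modus ponens as an operation that inspects only the shape of the premises, never their content. The tuple encoding of conditionals below is an illustrative assumption, not a standard notation:

```python
def modus_ponens(premises):
    """If the premises contain both A and ("if", A, B), derive B.
    The rule looks only at the form of the premises."""
    for prem in premises:
        if isinstance(prem, tuple) and prem[0] == "if":
            _, antecedent, consequent = prem
            if antecedent in premises:
                return consequent
    return None  # no application of the rule is possible

# "if it rains, the street is wet; it rains; therefore the street is wet"
conclusion = modus_ponens([("if", "it rains", "the street is wet"), "it rains"])
print(conclusion)  # prints: the street is wet
```

The function would derive the consequent for any propositions substituted for A and B, which is exactly what makes the rule topic-neutral.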
The third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false. Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes between surface and depth information. On this view, deductive inferences are uninformative on the depth level but can be highly informative on the surface level, as may be the case for various mathematical proofs.
Ampliative inferences, on the other hand, are informative even on the depth level. They are more interesting in this sense since the thinker may acquire substantive information from them and thereby learn something genuinely new. But this feature comes with a certain cost: the premises support the conclusion in the sense that they make its truth more likely but they do not ensure its truth. This means that the conclusion of an ampliative argument may be false even though all its premises are true. This characteristic is closely related to non-monotonicity and defeasibility: it may be necessary to retract an earlier conclusion upon receiving new information or in the light of new inferences drawn. Ampliative reasoning is of central importance since a lot of the arguments found in everyday discourse and the sciences are ampliative. Ampliative arguments are not automatically incorrect. Instead, they just follow different standards of correctness. An important aspect of most ampliative arguments is that the support they provide for their conclusion comes in degrees. In this sense, the line between correct and incorrect arguments is blurry in some cases, as when the premises offer weak but non-negligible support. This contrasts with deductive arguments, which are either valid or invalid with nothing in-between.
The terminology used to categorize ampliative arguments is inconsistent. Some authors use the term "induction" to cover all forms of non-deductive arguments. But in a more narrow sense, induction is only one type of ampliative argument besides abductive arguments. Some authors also allow conductive arguments as one more type. In this narrow sense, induction is often defined as a form of statistical generalization. In this case, the premises of an inductive argument are many individual observations that all show a certain pattern. The conclusion then is a general law that this pattern always obtains. In this sense, one may infer that "all elephants are gray" based on one's past observations of the color of elephants. A closely related form of inductive inference has as its conclusion not a general law but one more specific instance, as when it is inferred that an elephant one has not seen yet is also gray. Some theorists stipulate that inductive inferences rest only on statistical considerations in order to distinguish them from abductive inference.
Abductive inference may or may not take statistical observations into consideration. In either case, the premises offer support for the conclusion because the conclusion is the best explanation of why the premises obtain.[ii] In this sense, abduction is also called the inference to the best explanation. For example, given the premise that there is a plate with breadcrumbs in the kitchen in the early morning, one may infer the conclusion that one's house-mate had a midnight snack and was too tired to clean the table. This conclusion is justified because it is the best explanation of the current state of the kitchen. For abduction, it is not sufficient that the conclusion explains the premises. For example, the conclusion that a burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation.
Not all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion. So the argument "it is sunny today; therefore spiders have eight legs" is fallacious even though the conclusion is true. Some theorists give a more restrictive definition of fallacies by additionally requiring that they appear to be correct. This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them. However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people.
Fallacies are usually divided into formal and informal fallacies. For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male". But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument. Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in "feathers are light; what is light cannot be dark; therefore feathers cannot be dark". Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise. In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it.
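For formal fallacies, invalidity can be demonstrated mechanically by searching for a counterexample assignment: one that makes all premises true and the conclusion false. This brute-force sketch contrasts denying the antecedent with the valid modus ponens:

```python
from itertools import product

def valid(premises, conclusion, num_vars):
    """An argument form is valid if no truth-value assignment makes
    every premise true while the conclusion is false."""
    for vals in product([True, False], repeat=num_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample
    return True

imp = lambda p, q: (not p) or q  # "if P then Q"

# Modus ponens: if P then Q; P; therefore Q -- valid.
print(valid([imp, lambda p, q: p], lambda p, q: q, 2))          # True
# Denying the antecedent: if P then Q; not P; therefore not Q -- invalid.
print(valid([imp, lambda p, q: not p], lambda p, q: not q, 2))  # False
```

No such mechanical test exists for informal fallacies, since their errors lie in content or context rather than form.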
The main focus of most logicians is to investigate the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference. They constitute definitory rules, which determine whether a certain logical move is correct or which moves are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary in order to reach a given conclusion based on a certain set of premises. This distinction does not just apply to logic but to various games as well. In chess, for example, the definitory rules dictate that bishops may only move diagonally while the strategic rules describe how the allowed moves may be used to win a game, for example, by controlling the center and by defending one's king. A third type consists of empirical descriptive rules, which belong to the field of psychology and generalize how people actually draw inferences. It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning.
A formal system of logic consists of a language, a proof system, and a semantics. A system's language and proof system are sometimes grouped together as the system's syntax, since they both concern the form rather than the content of the system's expressions.
The term "a logic" is often used as a countable noun to refer to a particular formal system of logic. Different logics can differ from each other in their language, proof system, or their semantics. Starting in the 20th century, many new formal systems have been proposed.[iii]
A language is a set of well-formed formulas. For instance, in propositional logic, "P ∧ Q" is a formula but "∧Q" is not. Languages are typically defined by providing an alphabet of basic expressions and recursive syntactic rules which build them into formulas.
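Such recursive syntactic rules can be sketched as a well-formedness check. The tuple encoding and the choice of connectives below are illustrative assumptions, not a fixed standard:

```python
def well_formed(expr):
    """Recursive syntactic rules: atoms are formulas; ("not", f) is a
    formula when f is; ("and"/"or"/"if", f, g) are formulas when both
    parts are. Nothing else is a formula."""
    if isinstance(expr, str):
        return True  # atomic proposition, e.g. "P"
    if isinstance(expr, tuple):
        if len(expr) == 2 and expr[0] == "not":
            return well_formed(expr[1])
        if len(expr) == 3 and expr[0] in ("and", "or", "if"):
            return well_formed(expr[1]) and well_formed(expr[2])
    return False

print(well_formed(("and", "P", "Q")))  # True: P ∧ Q is well formed
print(well_formed(("and", "P")))       # False: the conjunction lacks a part
```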
A proof system is a collection of formal rules which define when a conclusion follows from given premises. For instance, the classical rule of conjunction introduction states that "P ∧ Q" follows from the premises "P" and "Q". Rules in a proof system are always defined in terms of formulas' syntactic form, never in terms of their meanings. Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are a number of different types of proof systems, including natural deduction and sequent calculi. Proof systems are closely linked to philosophical work which characterizes logic as the study of valid inference.
A semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula "P ∧ Q" the denotation "true" whenever "P" and "Q" are true. Entailment is a semantic relation which holds between formulas when the first cannot be true without the second being true as well. Semantics is closely tied to the philosophical characterization of logic as the study of logical truth.
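A semantics for classical propositional logic can be sketched as an evaluator that maps formulas to truth values, with entailment defined by quantifying over all valuations. The formula encoding below is an illustrative assumption carried over from nothing standard:

```python
from itertools import product

def evaluate(formula, valuation):
    """Map a formula to its denotation (a truth value), given a
    valuation assigning truth values to atomic propositions."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], valuation)
    left = evaluate(formula[1], valuation)
    right = evaluate(formula[2], valuation)
    return {"and": left and right,
            "or": left or right,
            "if": (not left) or right}[op]

def entails(premise, conclusion, atoms):
    """premise entails conclusion if no valuation makes the premise
    true while the conclusion is false."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if evaluate(premise, v) and not evaluate(conclusion, v):
            return False
    return True

print(entails(("and", "P", "Q"), "P", ["P", "Q"]))  # True
print(entails("P", ("and", "P", "Q"), ["P", "Q"]))  # False
```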
A system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly.
Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic. But modern developments in this field have led to a vast proliferation of logical systems. One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics. Classical logic is to be distinguished from traditional or Aristotelian logic. It encompasses propositional logic and first-order logic. It is "classical" in the sense that it is based on various fundamental logical intuitions shared by most logicians. These intuitions include the law of excluded middle, double negation elimination, the principle of explosion, and the bivalence of truth. It was originally developed to analyze mathematical arguments and was only later applied to other fields as well. Because of this focus on mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance, like the distinction between necessity and possibility, the problem of ethical obligation and permission, or the relations between past, present, and future. Such issues are addressed by extended logics. They build on the fundamental intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics.
Deviant logics, on the other hand, reject some of the fundamental intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue.
Informal logic is usually done in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some systems of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments.
When understood in the widest sense, Aristotelian logic encompasses a great variety of topics, including metaphysical theses about ontological categories and problems of scientific explanation. But in a more narrow sense, it refers to term logic or syllogistics. A syllogism is a certain form of argument involving three propositions: two premises and a conclusion. Each proposition has three essential parts: a subject, a predicate, and a copula connecting the subject to the predicate. For example, the proposition "Socrates is wise" is made up of the subject "Socrates", the predicate "wise", and the copula "is". The subject and the predicate are the terms of the proposition. In this sense, Aristotelian logic does not contain complex propositions made up of various simple propositions. It differs in this aspect from propositional logic, in which any two propositions can be linked using a logical connective like "and" to form a new complex proposition.
Aristotelian logic differs from predicate logic in that the subject is either universal, particular, indefinite, or singular. For example, the term "all humans" is a universal subject in the proposition "all humans are mortal". A similar proposition could be formed by replacing it with the particular term "some humans", the indefinite term "a human", or the singular term "Socrates". In predicate logic, on the other hand, universal and particular propositions would be expressed by using a quantifier and two predicates. Another important difference is that Aristotelian logic only includes predicates for simple properties of entities, but lacks predicates corresponding to relations between entities. The predicate can be linked to the subject in two ways: either by affirming it or by denying it. For example, the proposition "Socrates is not a cat" involves the denial of the predicate "cat" to the subject "Socrates". Using different combinations of subjects and predicates, a great variety of propositions and syllogisms can be formed. Syllogisms are characterized by the fact that the premises are linked to each other and to the conclusion by sharing one predicate in each case. Thus, these three propositions contain three predicates, referred to as major term, minor term, and middle term. The central aspect of Aristotelian logic involves classifying all possible syllogisms into valid and invalid arguments according to how the propositions are formed. For example, the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal" is valid. The syllogism "all cats are mortal; Socrates is mortal; therefore Socrates is a cat", on the other hand, is invalid.
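The contrast between these two syllogisms can be made concrete with a small sketch. Interpreting terms as sets of individuals is a modern, illustrative reading rather than part of Aristotle's own apparatus, and the example domain is invented:

```python
# Interpreting terms as sets of individuals (a modern, illustrative reading).
men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Fido"}
cats = {"Fido"}

# Valid form: "all men are mortal; Socrates is a man; therefore Socrates is mortal."
if men <= mortals and "Socrates" in men:
    assert "Socrates" in mortals  # the conclusion cannot fail

# Invalid form: "all cats are mortal; Socrates is mortal; therefore Socrates is a cat."
premises_hold = cats <= mortals and "Socrates" in mortals
conclusion_holds = "Socrates" in cats
print(premises_hold, conclusion_holds)  # True False: a countermodel to the form
```

The second form is invalid precisely because a domain exists in which its premises hold while its conclusion fails.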
Propositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions P and Q as the complex formula P ∧ Q. Unlike predicate logic, where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic components. Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones; they cannot represent inferences that result from the inner structure of a proposition.
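The truth-functional character of the connectives can be sketched in a few lines of Python; this is a minimal illustration, not a full logic library:

```python
from itertools import product

# Truth-functional semantics: the value of "P and Q" depends only on the
# values assigned to the atomic propositions P and Q.
def conjunction(p: bool, q: bool) -> bool:
    return p and q

# Enumerate all assignments to build the connective's truth table.
table = {(p, q): conjunction(p, q) for p, q in product([True, False], repeat=2)}
for row, value in table.items():
    print(row, value)  # the conjunction is true only in the (True, True) row
```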
First-order logic provides an account of quantifiers general enough to express a wide set of arguments occurring in natural language. For example, Bertrand Russell's famous barber paradox, "there is a man who shaves all and only men who do not shave themselves", can be formalised by the sentence ∃x(Man(x) ∧ ∀y(Man(y) → (Shaves(x, y) ↔ ¬Shaves(y, y)))), using the non-logical predicate Man(x) to indicate that x is a man, and the non-logical relation Shaves(x, y) to indicate that x shaves y; all other symbols of the formula are logical, expressing the universal and existential quantifiers, conjunction, implication, negation, and the biconditional.
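That the barber sentence cannot be satisfied can be checked mechanically on small finite domains. The following brute-force sketch is illustrative code, not a general theorem prover; it tries every possible shaving relation on three men and finds no model:

```python
from itertools import product

men = [0, 1, 2]
pairs = [(x, y) for x in men for y in men]

def barber_exists(shaves):
    # x is a "barber" iff, for every man y, x shaves y exactly when y does
    # not shave himself -- the condition the paradoxical sentence asserts.
    return any(all(((x, y) in shaves) == ((y, y) not in shaves) for y in men)
               for x in men)

# Try all 2**9 possible shaving relations on three men.
found = any(barber_exists({p for p, keep in zip(pairs, bits) if keep})
            for bits in product([0, 1], repeat=len(pairs)))
print(found)  # False: no relation satisfies the sentence
```

The instantiation y = x already forces the contradiction Shaves(x, x) ↔ ¬Shaves(x, x), which is why no domain of any size can satisfy the sentence.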
The development of first-order logic is usually attributed to Gottlob Frege, who is also credited as one of the founders of analytic philosophy, but the formulation of first-order logic most often used today is found in Principles of Mathematical Logic by David Hilbert and Wilhelm Ackermann in 1928. The analytical generality of first-order logic allowed the formalization of mathematics, drove the investigation of set theory, and allowed the development of Alfred Tarski's approach to model theory. It provides the foundation of modern mathematical logic.
Many extended logics take the form of modal logic by introducing modal operators. Modal logics were originally developed to represent statements about necessity and possibility. For instance, the modal formula ◇P can be read as "possibly P", while □P can be read as "necessarily P". Modal logics can be used to represent different phenomena depending on what flavor of necessity and possibility is under consideration. When □ is used to represent epistemic necessity, □P states that P is known. When □ is used to represent deontic necessity, □P states that P is a moral or legal obligation. Within philosophy, modal logics are widely used in formal epistemology, formal ethics, and metaphysics. Within linguistic semantics, systems based on modal logic are used to analyze linguistic modality in natural languages. Other fields such as computer science and set theory have applied the relational semantics for modal logic beyond its original conceptual motivation, using it to provide insight into patterns including the set-theoretic multiverse and transition systems in computation.
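The relational semantics for modal logic can be sketched directly: a model supplies a set of worlds, an accessibility relation, and a valuation, and the modal operators quantify over accessible worlds. All names and the example model below are illustrative:

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
true_at = {"p": {"w2", "w3"}}  # the atomic proposition p holds at w2 and w3

def necessarily(prop, world):
    # "necessarily p" holds at a world iff p holds at every accessible world
    return all(w in true_at[prop] for w in access[world])

def possibly(prop, world):
    # "possibly p" holds at a world iff p holds at some accessible world
    return any(w in true_at[prop] for w in access[world])

print(necessarily("p", "w1"), possibly("p", "w1"))  # True True
print(necessarily("p", "w3"))  # True, vacuously: w3 accesses no world
```

Varying the constraints on the accessibility relation (reflexivity, transitivity, and so on) yields the different flavors of necessity mentioned above.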
Higher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. Quantifiers correspond to terms like "all" or "some". In classical first-order logic, quantifiers are only applied to individuals. The formula "∃x(Apple(x) ∧ Sweet(x))" (some apples are sweet) is an example of the existential quantifier "∃" applied to the individual variable "x". In higher-order logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula "∃Q(Q(mary) ∧ Q(john))". In this case, the existential quantifier is applied to the predicate variable "Q". The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. But it has various drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still much more widely used.
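Over a finite stock of predicates, the higher-order claim that Mary and John share some quality reduces to a search over predicate extensions, as this small sketch with invented example data shows:

```python
# Predicates modeled by their extensions (the sets of individuals they apply to).
qualities = {
    "tall": {"mary", "sue"},
    "kind": {"mary", "john"},
    "rich": {"john"},
}

# ∃Q (Q(mary) ∧ Q(john)): some predicate applies to both individuals.
shares = any("mary" in ext and "john" in ext for ext in qualities.values())
print(shares)  # True: "kind" witnesses the existential predicate variable Q
```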
A great variety of deviant logics have been proposed. One major paradigm is intuitionistic logic, which rejects the law of the excluded middle. Intuitionism was developed by the Dutch mathematicians L.E.J. Brouwer and Arend Heyting to underpin their constructive approach to mathematics, in which the existence of a mathematical object can only be proven by constructing it. Intuitionism was further pursued by Gerhard Gentzen, Kurt Gödel, Michael Dummett, among others. Intuitionistic logic is of great interest to computer scientists, as it is a constructive logic and sees many applications, such as extracting verified programs from proofs and influencing the design of programming languages through the formulae-as-types correspondence. It is closely related to nonclassical systems such as Gödel–Dummett logic and inquisitive logic.
Multi-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate. These logics have been applied to topics such as presupposition in linguistics. Fuzzy logics are multivalued logics that have an infinite number of "degrees of truth", represented by a real number between 0 and 1.
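Kleene's strong three-valued connectives can be sketched as follows, with Python's None standing in for the third, indeterminate value; the encoding is illustrative:

```python
# Kleene's strong three-valued logic: True, False, and None ("indeterminate").
def k3_not(p):
    return None if p is None else (not p)

def k3_and(p, q):
    # A single false conjunct settles the conjunction as false;
    # otherwise any indeterminacy propagates.
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

print(k3_and(True, None))   # None: indeterminate
print(k3_and(False, None))  # False: a false conjunct settles the matter
print(k3_not(None))         # None
```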
The pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion. As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments. A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion. This is achieved by making arguments: arguments are the moves of the game. They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion. Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules. These standards also depend on the type of dialogue: in the context of science, the dialogue rules are different from the rules in the context of negotiation.
The epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments. It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified. Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion. In this sense, logical normativity consists in epistemic success or rationality. For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid. The Bayesian approach is one example of an epistemic approach. Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence. Degrees of belief are understood as subjective probabilities in the believed proposition, i.e. as how certain the agent is that the proposition is true. On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new incoming information. Correct reasoning, and the arguments it is based on, follows the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws.
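The principle of conditionalization can be sketched numerically: upon learning evidence E, the agent's new credence in a hypothesis H becomes the old conditional credence P(H | E), computed here via Bayes' theorem. The numbers are purely illustrative:

```python
# Conditionalization: credences are updated to the conditional probability
# of the hypothesis given the newly learned evidence.
prior_h = 0.3          # initial credence in hypothesis H (illustrative)
p_e_given_h = 0.9      # how likely the evidence is if H is true
p_e_given_not_h = 0.2  # how likely the evidence is if H is false

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
print(round(posterior_h, 3))  # ≈ 0.659: the evidence raises the credence in H
```

Reasoning that departs from this update rule, for example by ignoring the evidence or overshooting the conditional probability, counts as irrational on the Bayesian picture.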
Logic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways, like by investigating the philosophical presuppositions of fundamental logical concepts, by interpreting and analyzing logic through mathematical structures, or by studying and comparing abstract properties of formal logical systems.
Philosophy of logic is the philosophical discipline studying the scope and nature of logic. It investigates many presuppositions implicit in logic, like how to define its fundamental concepts or the metaphysical assumptions associated with them. It is also concerned with how to classify the different logical systems and considers the ontological commitments they incur. Philosophical logic is one important area within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. This application usually happens in the form of extended or deviant logical systems.
Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. However, it can also include attempts to use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. The latter was a major concern in early 20th century mathematical logic, which pursued the program of logicism pioneered by philosopher-logicians such as Gottlob Frege and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and the programme was to show this by means of a reduction of mathematics to logic. The various attempts to carry this out met with failure, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems.
Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic, from Cantor's theorem, through the status of the Axiom of Choice and the question of the independence of the continuum hypothesis, to the modern debate on large cardinal axioms.
Recursion theory captures the idea of computation in logical and arithmetic terms; its most classical achievements are the undecidability of the Entscheidungsproblem by Alan Turing, and his presentation of the Church–Turing thesis. Today recursion theory is mostly concerned with the more refined problem of complexity classes—when is a problem efficiently solvable?—and the classification of degrees of unsolvability.
In computer science, logic is studied as part of the theory of computation. Key areas of logic that are relevant to computing include computability theory, modal logic, and category theory. Early computer machinery was based on ideas from logic such as the lambda calculus. Computer scientists also apply concepts from logic to problems in computing and vice versa. For instance, modern artificial intelligence builds on logicians' work in argumentation theory, while automated theorem proving can assist logicians in finding and checking proofs. In logic programming languages such as Prolog, a program computes the consequences of logical axioms and rules to answer a query.
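The core idea of logic programming, computing the consequences of facts and rules to answer a query, can be sketched in Python as a toy stand-in for what Prolog does with full unification and backtracking; the predicate names and data are invented for illustration:

```python
# A tiny knowledge base of facts, as relation/argument tuples.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def derive_grandparents(facts):
    # rule (in Prolog notation): grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    parents = {(a, b) for (rel, a, b) in facts if rel == "parent"}
    return {("grandparent", x, z)
            for (x, y) in parents
            for (y2, z) in parents if y == y2}

# Answering the query "who is a grandparent of whom?" by forward chaining.
print(derive_grandparents(facts))  # {('grandparent', 'tom', 'ann')}
```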
Formal semantics is a subfield of both linguistics and philosophy which uses logic to analyze meaning in natural language. It is an empirical field which seeks to characterize the denotations of linguistic expressions and explain how those denotations are composed from the meanings of their parts. The field was developed by Richard Montague and Barbara Partee in the 1970s, and remains an active area of research. Central questions include scope, binding, and linguistic modality.
What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled "Is Logic Empirical?" Hilary Putnam, building on a suggestion of W. V. Quine, argued that in general the facts of propositional logic have a similar epistemological status as facts about the physical universe, for example as the laws of mechanics or of general relativity, and in particular that what physicists have learned about quantum mechanics provides a compelling case for abandoning certain familiar principles of classical logic: if we want to be realists about the physical phenomena described by quantum theory, then we should abandon the principle of distributivity, substituting for classical logic the quantum logic proposed by Garrett Birkhoff and John von Neumann.
Another paper of the same name by Michael Dummett argues that Putnam's desire for realism mandates the law of distributivity. Distributivity of logic is essential for the realist's understanding of how propositions are true of the world in just the same way as he has argued the principle of bivalence is. In this way, the question, "Is Logic Empirical?" can be seen to lead naturally into the fundamental controversy in metaphysics on realism versus anti-realism.
Georg Wilhelm Friedrich Hegel was deeply critical of any simplified notion of the law of non-contradiction. It was based on Gottfried Wilhelm Leibniz's idea that this law of logic also requires a sufficient ground to specify from what point of view (or time) one says that something cannot contradict itself. A building, for example, both moves and does not move; the ground for the first is our solar system and for the second the earth. In Hegelian dialectic, the law of non-contradiction, of identity, itself relies upon difference and so is not independently assertable.
Closely related to questions arising from the paradoxes of implication comes the suggestion that logic ought to tolerate inconsistency. Relevance logic and paraconsistent logic are the most important approaches here, though the concerns are different: a key consequence of classical logic and some of its rivals, such as intuitionistic logic, is that they respect the principle of explosion, which means that the logic collapses if it is capable of deriving a contradiction. Graham Priest, the main proponent of dialetheism, has argued for paraconsistency on the grounds that there are, in fact, true contradictions.
Logic arose from a concern with correctness of argumentation. Modern logicians usually wish to ensure that logic studies just those arguments that arise from appropriately general forms of inference. For example, Thomas Hofweber writes in the Stanford Encyclopedia of Philosophy that logic "does not, however, cover good reasoning as a whole. That is the job of the theory of rationality. Rather it deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations."
The idea that logic treats special forms of argument, deductive argument, rather than argument in general, has a history in logic that dates back at least to logicism in mathematics (19th and 20th centuries) and the advent of the influence of mathematical logic on philosophy. A consequence of taking logic to treat special kinds of argument is that it leads to identification of special kinds of truth, the logical truths (with logic equivalently being the study of logical truth), and excludes many of the original objects of study of logic that are treated as informal logic. Robert Brandom has argued against the idea that logic is the study of a special kind of logical truth, arguing that instead one can talk of the logic of material inference (in the terminology of Wilfrid Sellars), with logic making explicit the commitments that were originally implicit in informal inference.
The philosophical vein of various kinds of skepticism contains many kinds of doubt and rejection of the various bases on which logic rests, such as the idea of logical form, correct inference, or meaning, sometimes leading to the conclusion that there are no logical truths. This is in contrast with the usual views in philosophical skepticism, where logic directs skeptical enquiry to doubt received wisdoms, as in the work of Sextus Empiricus.
Friedrich Nietzsche provides a strong example of the rejection of the usual basis of logic: his radical rejection of idealization led him to reject truth as a "... mobile army of metaphors, metonyms, and anthropomorphisms—in short ... metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins". His rejection of truth did not lead him to reject the idea of either inference or logic completely but rather suggested that "logic [came] into existence in man's head [out] of illogic, whose realm originally must have been immense. Innumerable beings who made inferences in a way different from ours perished". Thus there is the idea that logical inference has a use as a tool for human survival, but that its existence does not support the existence of truth, nor does it have a reality beyond the instrumental: "Logic, too, also rests on assumptions that do not correspond to anything in the real world".
This position held by Nietzsche, however, has come under intense scrutiny for several reasons. Some philosophers, such as Jürgen Habermas, claim his position is self-refuting and accuse Nietzsche of not even having a coherent perspective, let alone a theory of knowledge. Georg Lukács, in his book The Destruction of Reason, asserts that, "Were we to study Nietzsche's statements in this area from a logico-philosophical angle, we would be confronted by a dizzy chaos of the most lurid assertions, arbitrary and violently incompatible." In his book A History of Western Philosophy, Bertrand Russell described Nietzsche's claims as irrational, noting that "He is fond of expressing himself paradoxically and with a view to shocking conventional readers".
Logic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics. In this approach, judgements are broken down into propositions consisting of two terms that are related by one of a fixed number of relations. Inferences are expressed by means of syllogisms that consist of two propositions sharing a common term as premise, and a conclusion that is a proposition involving the two unrelated terms from the premises. Aristotle's monumental insight was the notion that arguments can be characterized in terms of their form. The later logician Łukasiewicz described this insight as "one of Aristotle's greatest inventions". Aristotle's system of logic was also responsible for the introduction of hypothetical syllogism, temporal modal logic, and inductive logic, as well as influential vocabulary such as terms, predicables, syllogisms and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. It has now been superseded by later work, though many of its key insights live on in modern systems of logic.
Ibn Sina (Avicenna) (980–1037 CE) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world, and also had an important influence on Western medieval writers such as Albertus Magnus and William of Ockham. Ibn Sina wrote on the hypothetical syllogism and on the propositional calculus. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as the methods of agreement, difference, and concomitant variation which are critical to the scientific method. Fakhr al-Din al-Razi (b. 1149) criticised Aristotle's "first figure" and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill (1806–1873).
In Europe during the later medieval period, major efforts were made to show that Aristotle's ideas were compatible with Christian faith. During the High Middle Ages, logic became a main focus of philosophers, who would engage in critical logical analyses of philosophical arguments, often using variations of the methodology of scholasticism. Initially, medieval Christian scholars drew on the classics that had been preserved in Latin through commentaries by figures such as Boethius; later, the work of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes, 1126–1198 CE) was drawn on, which expanded the range of ancient works available to medieval Christian scholars, since more Greek work had been available to Muslim scholars than had been preserved in the Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was released. By the 18th century, the structured approach to arguments had degenerated and fallen out of favour, as depicted in Holberg's satirical play Erasmus Montanus. The Chinese logical philosopher Gongsun Long (c. 325–250 BCE) proposed the paradox "One and one cannot become two, since neither becomes two."[iv] In China, however, the tradition of scholarly investigation into logic was repressed by the Qin dynasty following the legalist philosophy of Han Feizi.
In India, the Anviksiki school of logic was founded by Medhātithi (c. 6th century BCE). Innovations in the scholastic school, called Nyaya, continued from ancient times into the early 18th century with the Navya-Nyāya school. By the 16th century, it had developed theories resembling modern logic, such as Gottlob Frege's "distinction between sense and reference of proper names" and his "definition of number", as well as the theory of "restrictive conditions for universals" anticipating some of the developments in modern set theory.[v] From 1824 onward, Indian logic attracted the attention of many Western scholars and influenced important 19th-century logicians such as Charles Babbage, Augustus De Morgan, and George Boole. In the 20th century, Western philosophers like Stanislaw Schayer and Klaus Glashoff have explored Indian logic more extensively.
The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of symbolic logic (now called mathematical logic). In 1854, George Boole published The Laws of Thought, introducing symbolic logic and the principles of what is now known as Boolean logic. In 1879, Gottlob Frege published Begriffsschrift, which inaugurated modern logic with the invention of quantifier notation, reconciling the Aristotelian and Stoic logics in a broader system, and solving such problems for which Aristotelian logic was impotent, such as the problem of multiple generality. From 1910 to 1913, Alfred North Whitehead and Bertrand Russell published Principia Mathematica on the foundations of mathematics, attempting to derive mathematical truths from axioms and inference rules in symbolic logic. In 1931, Gödel raised serious problems with the foundationalist program and logic ceased to focus on such issues.
The development of logic since Frege, Russell, and Wittgenstein has had a profound influence on the practice of philosophy and the perceived nature of philosophical problems (see analytic philosophy) and philosophy of mathematics. Logic, especially sentential logic, is implemented in computer logic circuits and is fundamental to computer science. Logic is commonly taught by university philosophy, sociology, advertising and literature departments, often as a compulsory discipline.
The two most important types of logical calculi are propositional (or sentential) calculi and functional (or predicate) calculi. A propositional calculus is a system containing propositional variables and connectives (some also contain propositional constants) but not individual or functional variables or constants. In the extended propositional calculus, quantifiers whose operator variables are propositional variables are added.